Generative AI poised to make substantial impact on DevSecOps
And having generative AI automatically use safe practices and mechanisms contributes to a more secure coding environment, Robinson says. “The benefits extend to improved code structuring, enhanced explanations and a streamlined testing process, ultimately reducing the testing burden on DevSecOps teams.”
Some developers think that we’re already there. According to a report released in November by Snyk, a code security platform, 76% of technology and security pros say that AI code is more secure than human code.
But, today at least, that sense of security might be an illusion, and a dangerous one at that. According to a Stanford research paper last updated in December, developers who used an AI coding assistant wrote “significantly less secure code” yet were more likely to believe they had written secure code than those who didn’t use AI. The AI coding tools also sometimes suggested insecure libraries, and the developers accepted the suggestions without reading the documentation for those components, the researchers said.
Similarly, in Snyk’s own survey, 92% of respondents agreed that AI generates insecure code suggestions at least some of the time, and a fifth said that it generates security problems “frequently.”
However, even though the use of generative AI speeds up code production, only 10% of survey respondents say that they have automated the majority of their security checks and scanning, and 80% say that developers in their organizations bypass AI security policies altogether.
In fact, more than half of organizations have not changed their software security processes despite adopting generative AI coding tools. Among those that did, the most common change was more frequent code audits, followed by implementing security automation.
All of this AI-generated code still needs to undergo security testing, says Forrester’s Worthington. In particular, enterprises need to ensure they have tools in place, and integrated, to check all the new code as well as the libraries and container images it pulls in. “We’re seeing more need for DevSecOps tools because of generative AI.”
Generative AI can help the DevSecOps team write documentation, Worthington adds. In fact, generating text was ChatGPT’s first use case. Generative AI is particularly good at creating first drafts of documents and summarizing information.
So, it’s no surprise that Google’s State of DevOps report shows that AI had a 1.5 times impact on organizational performance as a result of improvements to technical documentation. And, according to the CoderPad survey, documentation and API support is the fourth most popular use case for generative AI, with more than a quarter of tech professionals using it for this purpose.
It can work the other way, too, helping developers comb through documentation faster. “When I coded a lot, a lot of my time was spent digging through documentation,” says Ben Moseley, professor of operations research at Carnegie Mellon University. “If I could quickly get to that information, it would really help me out.”
Generative AI for testing and quality assurance
Generative AI has the potential to help DevSecOps teams find vulnerabilities and security issues that traditional testing tools miss, explain the problems, and suggest fixes. It can also help generate test cases.
Some security flaws are still too nuanced for these tools to catch, says Carnegie Mellon’s Moseley. “For those challenging things, you’ll still need people to look for them, you’ll need experts to find them.” However, generative AI can pick up standard errors.
And, according to the CoderPad survey, about 13% of tech professionals already use generative AI for testing and quality assurance. Carm Taglienti, chief data officer and data and AI portfolio director at Insight, expects that we’ll soon see the adoption of generative AI systems custom-trained on vulnerability databases. “And a short-term approach is to have a knowledge base or vector databases with these vulnerabilities to augment my particular queries,” he says.
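The short-term approach Taglienti describes is essentially retrieval-augmented prompting: pull the relevant entries out of a locally maintained vulnerability knowledge base and attach them to the query sent to the model. The sketch below is a minimal illustration of that pattern, not a real vulnerability feed or vendor API; the entries, the naive keyword scoring, and the prompt wording are all assumptions made for the example.

```python
# Toy knowledge base; in practice this would be built from CVE/CWE or similar data.
KNOWLEDGE_BASE = [
    {"id": "CWE-89",  "text": "SQL injection when user input is concatenated into queries"},
    {"id": "CWE-798", "text": "Use of hard-coded credentials in source code"},
    {"id": "CWE-327", "text": "Use of a broken or risky cryptographic algorithm"},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank entries by naive keyword overlap; a real system would use embeddings."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda e: len(q_tokens & set(e["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(change: str) -> str:
    """Augment the review request with the retrieved vulnerability context."""
    context = "\n".join(f"- {e['id']}: {e['text']}" for e in retrieve(change))
    return (
        "Known weakness classes to check for:\n"
        f"{context}\n\n"
        "Review the following change for these issues:\n"
        f"{change}"
    )

if __name__ == "__main__":
    change = "SQL query built by string concatenation of user input"
    print(build_prompt(change))  # The augmented prompt would then go to the model.
```

The point of the pattern is that the vulnerability context lives in data the team controls and refreshes, so the model's suggestions can be steered by current advisories without retraining anything.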
A bigger question for enterprises will be how much of the generative AI functionality to automate, and how much to keep humans in the loop; for example, when the AI is used to detect code vulnerabilities early in the process. “To what extent do I allow code to be automatically corrected by the tool?” Taglienti asks. The first stage is to have generative AI produce a report about what it sees, and humans then go back and make the changes and fixes. By monitoring the tools’ accuracy, companies can start building trust for certain classes of corrections and move toward full automation. “That’s the cycle that people need to get into,” Taglienti tells CSO.
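That review-then-automate cycle can be made concrete with a simple trust gate: every AI-suggested fix starts as report-only, reviewers record whether suggestions in each class were correct, and only classes with a proven track record graduate to automatic application. The sketch below is an illustration of that idea under assumed thresholds and class names, not a description of any particular product.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class TrustTracker:
    min_samples: int = 50        # don't automate until enough human reviews exist (assumed threshold)
    min_accuracy: float = 0.98   # required acceptance rate before auto-apply (assumed threshold)
    accepted: dict = field(default_factory=lambda: defaultdict(int))
    reviewed: dict = field(default_factory=lambda: defaultdict(int))

    def record_review(self, fix_class: str, was_accepted: bool) -> None:
        """Log a human decision on one AI-suggested fix."""
        self.reviewed[fix_class] += 1
        if was_accepted:
            self.accepted[fix_class] += 1

    def can_auto_apply(self, fix_class: str) -> bool:
        """Automate only fix classes that humans have consistently approved."""
        n = self.reviewed[fix_class]
        if n < self.min_samples:
            return False
        return self.accepted[fix_class] / n >= self.min_accuracy

tracker = TrustTracker()
for _ in range(60):
    tracker.record_review("unpinned-dependency", was_accepted=True)
print(tracker.can_auto_apply("unpinned-dependency"))  # True: this class has earned trust
print(tracker.can_auto_apply("auth-logic-change"))    # False: stays report-only
```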
Similarly, for writing test cases, AI will need humans to guide the process, he says. “We should not escalate permissions to administrative areas — create test cases for that.”
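As a hedged illustration of the kind of guardrail test a human might direct the AI to generate for that rule, the pytest sketch below asserts that non-administrative roles can never be escalated to admin. The escalate_role() function is a stand-in stub for whatever access-control API a team actually has; none of these names come from the article.

```python
import pytest

class PermissionDenied(Exception):
    pass

def escalate_role(current_role: str, requested_role: str) -> str:
    """Stub of the system under test: only admins may grant admin access."""
    if requested_role == "admin" and current_role != "admin":
        raise PermissionDenied("non-admin users cannot gain admin rights")
    return requested_role

@pytest.mark.parametrize("role", ["viewer", "developer", "operator"])
def test_non_admin_cannot_escalate_to_admin(role):
    # The security expectation under test: escalation attempts must be rejected.
    with pytest.raises(PermissionDenied):
        escalate_role(role, "admin")

def test_admin_can_manage_admin_access():
    # Legitimate administrative paths should still work.
    assert escalate_role("admin", "admin") == "admin"
```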
Generative AI also has the potential to be used for interrogating the entire production environment, he says. “Does the production environment comply with these sets of known vulnerabilities related to the infrastructure?” There are already automated tools that check for unexpected changes in the environment or configuration, but generative AI can look at it from a different perspective, he says. “Did NIST change their specifications? Has a new vulnerability been identified?”
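One way to picture that kind of environment interrogation is a recurring audit that compares what is actually deployed against a locally cached list of known-bad versions, so the question “has a new vulnerability been identified?” can be re-asked whenever the advisory data is refreshed. The sketch below is a minimal, assumption-laden example; the package names, versions, and advisory IDs are made up for illustration.

```python
# Snapshot of the running environment (in practice, gathered from the hosts
# or container images themselves).
DEPLOYED = {"examplelib": "3.0.1", "widgetd": "2.4.0", "toolkit": "1.9.3"}

# Locally cached advisories, refreshed from whatever feed the team trusts.
ADVISORIES = [
    {"package": "examplelib", "affected": {"3.0.0", "3.0.1"}, "id": "ADV-0001"},
    {"package": "widgetd",    "affected": {"1.0.0"},          "id": "ADV-0002"},
]

def audit(deployed: dict[str, str], advisories: list[dict]) -> list[str]:
    """Return a finding for every deployed version listed in an advisory."""
    findings = []
    for adv in advisories:
        version = deployed.get(adv["package"])
        if version in adv["affected"]:
            findings.append(f"{adv['id']}: {adv['package']} {version} is affected")
    return findings

for finding in audit(DEPLOYED, ADVISORIES):
    print(finding)  # e.g. "ADV-0001: examplelib 3.0.1 is affected"
```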
Need for internal generative AI policies
Curtis Franklin, principal analyst for enterprise security management at Omdia, says that he talks to development professionals at large enterprises and they’re using generative AI. And so are independent developers and consultants and smaller teams. “The difference is that the large companies have come out with formal policies on how it will be used,” he tells CSO. “With real guidelines on how it must be checked, modified, and tested before any code that passed through generative AI can be used in production. My sense is that this formal framework for quality assurance is not in place at smaller companies because it’s overhead that they can’t afford.”
In the long term, as generative AI code generators improve, they do have the potential to improve overall software security. The problem is that we’re going to hit a dangerous inflection point, Franklin says. “When the generative AI engines and models get to the point where they consistently generate code that’s pretty good, the pressure will be on development teams to assume that pretty good is good enough,” Franklin says. “And it is that point at which vulnerabilities are more likely to slide through undetected and uncorrected. That’s the danger zone.”
As long as developers and managers are appropriately skeptical and cautious, then generative AI will be a useful tool, he says. “When the level of caution drops, it gets dangerous — the same way we’ve seen in other areas, like the attorneys who turned in briefs generated by AI that included citations to cases that didn’t exist.”