
Is 46% of your AI-generated code vulnerable?

As AI coding assistants become ubiquitous in enterprise development, a concerning statistic has emerged: 46% of AI-generated code contains security vulnerabilities. At KubeCon and CloudNativeCon, Jignesh Patel, field CTO at Harness, discussed how organizations can embrace AI-assisted development without compromising security.

Harness positions itself not as a security company, but as a Software Development Life Cycle (SDLC) platform focused on “everything after code.” This distinction matters because securing AI-generated code against vulnerabilities requires more than point-in-time scanning. It requires integrated governance throughout the entire software delivery lifecycle.

The vulnerability crisis in AI-generated code

Research from Veracode, published three months prior to the interview, revealed that 46% of code generated by AI contains security vulnerabilities. This finding challenges the assumption that AI coding assistants like GitHub Copilot inherently produce secure code.

“AI can help you generate code. Yeah, absolutely, it can. And it does it really well, sometimes too well that you miss critical steps in that,” Patel explained. The speed and volume of AI-generated code can overwhelm traditional review processes, making it easier for vulnerabilities to slip through to production.

The problem extends beyond simple coding errors. Recent supply chain attacks, including vulnerabilities in widely used packages, demonstrate that malicious actors are becoming increasingly sophisticated at exploiting code repositories and dependencies.

Harness’s multi-layered security approach

Harness implements security scanning at multiple critical points in the development lifecycle. The platform integrates directly with AI coding assistants, scanning code as it is generated in the IDE. This proactive approach catches vulnerabilities before they enter the codebase.

The application security testing portfolio includes:

SCA (Software Composition Analysis) scanning: Identifies vulnerabilities in open-source dependencies and third-party libraries that AI models frequently incorporate into generated code.

SAST (Static Application Security Testing): Analyzes source code for security vulnerabilities without executing the program, catching issues like SQL injection, cross-site scripting, and buffer overflows.

DAST (Dynamic Application Security Testing): Tests running applications to identify security vulnerabilities that only appear during execution.

Traceable API security: Protects APIs from attacks and unauthorized access, with reference customers like Home Depot using it to secure their API infrastructure.
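To make the SAST category concrete, the sketch below shows the single most common class of finding in AI-generated code: SQL built by string concatenation versus a parameterized query. This is an illustrative example (not Harness code); the table and function names are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The pattern a SAST scanner flags: user input is concatenated
    # directly into the SQL text, so crafted input can change the query
    # itself (SQL injection).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Remediation: a parameterized query keeps user input separate from
    # the SQL text, so it is always treated as data, never as code.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
    # A classic injection payload: the unsafe query leaks every row,
    # while the parameterized query matches nothing.
    payload = "' OR '1'='1"
    print(len(find_user_unsafe(conn, payload)))  # 2 rows leaked
    print(len(find_user_safe(conn, payload)))    # 0 rows
```

Both functions look plausible at a glance, which is exactly why automated scanning matters when code arrives at AI speed.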

Integration with best-of-breed security tools

Rather than forcing enterprises to rely solely on proprietary scanning engines, Harness integrates with leading security platforms including Wiz, Snyk, and Black Duck. This philosophy of “integrating with the best” allows organizations to maintain their existing security toolchain while adding AI-specific protections.

Supporting multiple AI model providers

Harness currently supports the major AI model providers, including Google Gemini, Anthropic Claude, AWS Bedrock, and GitHub Copilot. The platform is also moving toward a “bring your own model” approach, allowing enterprises using smaller, specialized, or on-premises models to benefit from the same security protections.

This flexibility matters as organizations experiment with different AI approaches. Some enterprises are deploying smaller, domain-specific models trained on their own codebases, while others use commercial platforms. Harness aims to secure code regardless of its AI source.

The ongoing battle between attackers and defenders

Patel emphasized that AI has become a double-edged sword in cybersecurity. While security teams use AI to identify vulnerabilities and strengthen defenses, malicious actors employ the same technology to discover weaknesses and craft attacks.

“It’s going to be a battle forever,” Patel said. “I don’t think there’s an advantage one way or the other. I think we’re going to have more and more tools, but I think how we deploy those tools, how we use those tools are going to become more important.”

This arms race makes security tooling essential, but Patel stressed that tools alone cannot solve the problem. Proper deployment, configuration, and governance of security solutions matter as much as the capabilities they provide.

The irreplaceable role of human oversight

Despite advanced scanning capabilities, Harness believes that human oversight remains critical. The current best practice recommends that developers read AI-generated code before accepting it into their projects, but Patel acknowledged the practical challenges.

Humans are lazy, Patel noted: no one is going to review 5,000 or 10,000 lines of AI-generated code, because it is nearly impossible. This makes peer reviews and code reviews even more important in AI-assisted development environments.

Harness employs a shared responsibility model in which automated tools handle the heavy lifting of vulnerability detection, while humans make the final decisions on code acceptance and remediation strategies. The platform doesn’t try to eliminate human involvement; it enhances human capabilities.

Preventing developers from bypassing security

One critical challenge is preventing developers from circumventing security guardrails. Under pressure to deliver features quickly, developers sometimes comment out security scans or bypass approval processes.

“The most critical thing is to ensure that developers are not bypassing guardrails and commenting out security scanning,” Patel explained. “Because we’ve seen that they have the ability to do it. They’re going to do it because what do developers want to do? They want to deliver fast and move on to the next thing.”

Effective SDLC governance must balance developer velocity with security requirements, making compliance the path of least resistance rather than an obstacle to productivity.
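One lightweight way to make guardrails enforceable is to validate pipeline definitions automatically rather than trusting that no one edits them. The sketch below is hypothetical (not Harness's implementation): it checks a parsed pipeline definition for required scan stages and reports any that are missing or disabled, so a merge gate can block the change.

```python
# Hypothetical policy: these stage names must be present and enabled.
REQUIRED_SCANS = {"sast", "sca"}

def missing_guardrails(pipeline):
    """Return the set of required scan stages that are absent or disabled.

    `pipeline` is a parsed pipeline definition: a list of stage dicts,
    each with a "name" and an optional "enabled" flag (default True).
    """
    active = {
        stage["name"]
        for stage in pipeline
        if stage.get("enabled", True)
    }
    return REQUIRED_SCANS - active

if __name__ == "__main__":
    # A developer under deadline pressure has disabled the SAST stage.
    pipeline = [
        {"name": "build"},
        {"name": "sast", "enabled": False},
        {"name": "sca"},
        {"name": "deploy"},
    ]
    violations = missing_guardrails(pipeline)
    if violations:
        print(f"Blocking merge: security stages disabled: {sorted(violations)}")
```

The design point is that the check runs server-side, outside the repository the developer controls, so commenting out a scan locally cannot silence it.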

The future of AI-assisted development security

As AI coding assistants evolve, Patel sees opportunities for AI to contribute beyond code generation. One promising application is using AI to automatically generate documentation and code comments, tasks that developers frequently neglect.

“Use AI for that and then it’s easier, more humanly readable,” Patel suggested. Well-documented AI-generated code would be easier for humans to review and maintain, addressing one of the key challenges in AI-assisted development.

Harness is also working with major enterprise platforms like SAP and Salesforce to extend security coverage to their proprietary development languages and deployment models, recognizing that enterprise development extends far beyond traditional programming languages.

The rise of AI-generated code presents both opportunities and risks for enterprise software development. Organizations that implement comprehensive security throughout their SDLC, from code generation through production deployment, can safely harness AI’s productivity benefits while managing its security challenges. As Patel’s insights reveal, this requires a combination of automated tooling, process governance, and human oversight working together across the entire software delivery lifecycle.

Also read: The future of generative AI in software testing