
Qodo upholds code integrity with AI testing agent

Qodo (the artist formerly known as CodiumAI, before a corporate rebrand) has detailed its generative AI “code integrity” platform. A key part of the software application development lifecycle, code integrity management is a quality assurance function designed to measure the degree to which code executes and performs as intended, without security concerns or other functional issues. Qodo Cover is a fully autonomous AI regression testing agent that generates test suites for software applications… the AI code machines are now starting to self-check and stay clean; this just might be intelligent intelligence.

The news from Qodo (pronounced KOH-DOH and not to be confused with Nvidia QODA, the GPU maker’s hybrid quantum-classical computing offering) comes at a time when AI-generated code is becoming increasingly pervasive in software development, with Google now saying that 25% of its new code is AI-generated.

Ensuring code integrity is of course now more critical than ever, a practice typically carried out through unit testing and, in particular, regression testing. This process is designed to help verify whether existing functionality remains intact as code evolves… a factor that becomes really important when AI is given code creation responsibilities.
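To make the mechanics concrete, a regression test is simply an automated check that pins down current behaviour so that future changes can’t silently break it. Here’s a minimal sketch in Python using pytest – the function and the expected values are invented for illustration, not drawn from Qodo:

```python
# test_pricing.py -- a minimal regression test (illustrative example).
# It pins down the current behaviour of apply_discount() so that any
# future refactor that changes the result causes the suite to fail.

def apply_discount(price: float, percent: float) -> float:
    """Toy function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_regression():
    # Expected values captured from the known-good implementation.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(59.99, 0) == 59.99
    assert apply_discount(10.0, 100) == 0.0
```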

How regression tests work

Qodo Cover generates regression tests by analysing source code, then validates each test to ensure that it runs successfully, passes and increases code coverage. Only tests that meet all these criteria are kept, ensuring every generated test adds value. The agentic software in use here can be deployed either as a GitHub action that automatically creates pull requests with suggested unit tests for newly changed code, or as a comprehensive tool that analyses entire repositories to identify and close coverage gaps by extending existing test suites.
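That description implies a fairly simple generate-validate-filter loop. The sketch below shows the shape of such a loop in Python, on the assumption that we can run the project’s test command and read back a coverage percentage – the helper names (the candidate list, measure_coverage, add_test, remove_test) are hypothetical placeholders, not Qodo’s actual APIs:

```python
import subprocess
# Sketch of a generate/validate/filter loop as described above.
# The candidates list stands in for LLM-generated tests; measure_coverage,
# add_test and remove_test are hypothetical helpers, not Qodo APIs.

def run_suite(test_command: str) -> bool:
    """Run the project's test command; True if every test passes."""
    return subprocess.run(test_command, shell=True).returncode == 0

def keep_valuable_tests(candidates, test_command,
                        measure_coverage, add_test, remove_test):
    baseline = measure_coverage()
    kept = []
    for test in candidates:
        add_test(test)                   # write candidate into the suite
        if not run_suite(test_command):  # must run and pass...
            remove_test(test)
            continue
        new_cov = measure_coverage()
        if new_cov <= baseline:          # ...and must raise coverage
            remove_test(test)
            continue
        baseline = new_cov               # accept: ratchet the baseline up
        kept.append(test)
    return kept
```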

Developers maintain full control over the process by reviewing and selectively accepting generated tests, ensuring they align with project standards and best practices.

“We’re fast approaching a point where the vast majority of code will be AI-generated, fundamentally changing how software is built,” said Itamar Friedman, CEO of Qodo. “It’s critical that we keep up by using AI not just for code generation, but also for maintaining and improving code quality. Qodo Cover represents a step toward autonomous software development by ensuring every piece of code, whether human or AI-written, is properly tested and maintainable.”

AI coders at work, really

Recently, a pull request generated fully autonomously by Qodo Cover, containing 15 unit tests, was accepted into Hugging Face’s PyTorch Image Models repository – a highly popular machine learning project with over 30,000 GitHub stars that is used by more than 40,000 other projects. This demonstrates the solution’s ability to generate production-quality tests that meet the standards of leading open-source projects.

“Built on Qodo’s open source Cover Agent project, Qodo Cover supports all of the popular AI models including Claude 3.5 Sonnet and GPT-4o. It delivers consistently high-quality results across more than a dozen programming languages — including JavaScript, TypeScript, Java, C++, PHP, C#, Ruby, Go, Rust and C — and works with various testing frameworks and coverage reporting tools. Each pull request includes detailed coverage improvement reports, helping teams track their testing progress,” detailed Friedman in a technical statement.
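As for those coverage improvement reports, they boil down to a before-and-after delta per file. A hedged sketch of what such a summary might look like – the format here is invented for illustration and is not Qodo’s actual report output:

```python
# Illustrative sketch: turn before/after coverage figures into the kind
# of coverage-improvement summary a pull request comment might carry.
# The report format is invented for illustration, not Qodo's output.

def coverage_report(file_cov_before: dict, file_cov_after: dict) -> str:
    lines = ["Coverage improvement report:"]
    for path, after in sorted(file_cov_after.items()):
        before = file_cov_before.get(path, 0.0)
        lines.append(f"  {path}: {before:.1f}% -> {after:.1f}% "
                     f"({after - before:+.1f})")
    return "\n".join(lines)

print(coverage_report(
    {"models/resnet.py": 61.2, "models/vit.py": 48.0},
    {"models/resnet.py": 74.5, "models/vit.py": 55.3},
))
```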

Qodo says that its tool will integrate with its own Qodo Merge and Qodo Gen products to create a suite of tools that work together to ensure code quality throughout the development lifecycle.

This is, um, quite cool

There’s deep (and arguably rather cool) stuff happening here i.e. AI is starting to code (in whatever defined and corralled areas we grant it privileges and policy access to do so) and we’re (thankfully) also applying a good degree of automation intelligence to the same coalface of software application development. The aim is to lock down code integrity, not only to control quality and functionality… but also (surely) to keep a tight handle on AI bias and hallucinations in one concerted and intelligently orchestrated way.

What could go wrong? Well, only if AI code integrity tools start to gain enough self-awareness to skew and eschew things the wrong way… which, if we continue to build this layer of the toolset diligently, should never be a factor or element that enters their DNA.