OpenAI has released GPT-5.2-Codex, a new version of its agentic AI model for software development that focuses specifically on professional software engineering and cybersecurity.
The model builds on GPT-5.2 but has been further optimized to work independently within complex development environments. With this release, OpenAI is positioning Codex not just as a programming assistant but as a broader support technology for the entire software development process.
GPT-5.2-Codex is designed to perform tasks that normally require substantial time and human coordination. The model can analyze, modify, and maintain large codebases, summarizing the relevant context as it goes so it can keep working consistently over extended sessions. That makes it suitable for refactoring, migrations, and other large-scale code changes that are common in enterprise environments.
According to OpenAI, the model can improve existing applications without adding new functionality, for example by reducing memory usage or optimizing performance.
GPT-5.2-Codex can perform multi-step processes
An important technical difference from previous Codex versions is its improved handling of long, complex tasks. Thanks to context compaction, GPT-5.2-Codex can carry out multi-step processes without losing track of earlier steps. Integration with development tools and terminals has also been refined further, making the model more reliable in realistic development scenarios. In addition, OpenAI has paid explicit attention to better support for Windows environments, a frequently cited limitation of earlier AI development tools.
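OpenAI has not published how this compaction works internally, but the general idea can be sketched: when an agent's conversation history approaches its token budget, older turns are condensed into a summary so the loop can continue without losing essential state. The snippet below is an illustrative sketch only; the model identifier, token budget, and summarization prompt are assumptions rather than details from the announcement.

```python
# Illustrative sketch of context compaction in a long-running agent loop.
# The model name, token budget, and summarization prompt are assumptions;
# OpenAI has not disclosed how GPT-5.2-Codex compacts context internally.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-5.2-codex"   # hypothetical model identifier
TOKEN_BUDGET = 100_000    # assumed budget before compaction kicks in


def estimate_tokens(messages):
    # Rough heuristic: ~4 characters per token. A real agent would use a tokenizer.
    return sum(len(m["content"]) // 4 for m in messages)


def compact(messages):
    """Replace older turns with a short summary, keeping the most recent ones."""
    old, recent = messages[:-6], messages[-6:]
    summary = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Summarize this agent transcript, keeping file paths, decisions, and open tasks."},
            {"role": "user", "content": "\n".join(m["content"] for m in old)},
        ],
    ).choices[0].message.content
    return [{"role": "system", "content": f"Summary of earlier work:\n{summary}"}] + recent


def agent_step(messages, user_input):
    # Append the new instruction, compact if the history has grown too large,
    # then ask the model for the next step.
    messages.append({"role": "user", "content": user_input})
    if estimate_tokens(messages) > TOKEN_BUDGET:
        messages = compact(messages)
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    return messages
```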
OpenAI also emphasizes the model’s improved visual capabilities. GPT-5.2-Codex can interpret screenshots, technical diagrams, and user interfaces more accurately and translate that information into working code. This makes it possible to convert software designs and mockups into functional prototypes more quickly, which is particularly relevant for teams in which design and development work closely together.
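The announcement itself contains no code, but the existing image-input format of the Chat Completions API gives an idea of what a mockup-to-code request could look like. The model identifier, prompt, and screenshot URL below are placeholders, not confirmed details of the release.

```python
# Minimal sketch: asking the model to turn a UI mockup into code via the
# Chat Completions image-input format. Model name and URL are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5.2-codex",  # hypothetical identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Implement this mockup as a React component with plain CSS."},
                {"type": "image_url", "image_url": {"url": "https://example.com/mockup.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```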
Cybersecurity is a central part of the announcement. OpenAI states that advances in programming logic and reasoning directly translate into better support for security research.
Earlier this year, a security researcher used a previous version of Codex to analyze a known React vulnerability. In the process, several new security flaws were discovered and responsibly reported. According to OpenAI, examples like this show how AI can help accelerate defensive security work such as bug detection, testing, and mitigation.
At the same time, OpenAI acknowledges that these capabilities also carry risks. That is why the company has opted for a controlled rollout. GPT-5.2-Codex is now available to paying ChatGPT users via the Codex CLI and IDE extensions, among others. Access via the API will be expanded in phases. In addition, OpenAI is launching an invitation program for trusted security professionals, focused on defensive applications within cybersecurity.
On benchmarks that measure realistic software development, such as SWE-Bench Pro and Terminal-Bench 2.0, GPT-5.2-Codex achieves the highest scores to date, according to OpenAI. This underscores the company’s claim that the model is not only stronger on paper, but also performs better in practice on real development and maintenance tasks.