
GPT-5.4-Cyber aims to further embed AI in cybersecurity

With the introduction of GPT-5.4-Cyber, OpenAI is taking the next step in the use of generative AI for cybersecurity. While earlier models already assisted with programming and code analysis, this variant focuses explicitly on defensive applications.

The new model variant is part of the Trusted Access for Cyber program, designed to give verified security professionals access to more powerful and less restrictive AI functionality. According to the company, this is necessary because both defenders and attackers are using AI. OpenAI emphasizes that the threat has existed for some time but is accelerating as more advanced models emerge.

What sets GPT-5.4-Cyber apart is its adjusted safety threshold. While standard models are cautious about hacking-related requests, the refusal threshold is lower here for trusted users. This makes it easier for security researchers to analyze vulnerabilities or investigate malware samples without access to source code. A spokesperson says the model is intentionally less likely to block legitimate security work, while abuse must still be kept in check.

The development is partly driven by practical experience: existing models already automate parts of the security process. At the same time, OpenAI acknowledges that the risks lie not only in the technology itself but also in how it is used. Access is therefore tied to identity verification and additional indicators of trustworthiness.

Codex Security as a foundation

The introduction builds on earlier initiatives such as Codex Security, which automatically detects vulnerabilities and proposes solutions. Since its broader rollout, the system has helped resolve more than three thousand critical and severe vulnerabilities, reports SiliconANGLE. Through an open-source program, OpenAI also reaches more than a thousand projects with free security scans.

Notably, OpenAI is moving away from centralized control over who gains access. Instead, the company aims to rely on objective verification and usage signals. According to a spokesperson, manually determining who is allowed to use these tools is not scalable.

The rollout of GPT-5.4-Cyber is phased but on a larger scale than previous initiatives. OpenAI is targeting thousands of security specialists and hundreds of teams. Multiple access levels apply, with only the highest category gaining access to the most permissive variant.

The timing of the launch suggests that more powerful models are expected later this year. OpenAI is building its systems on the assumption that future models will reach a high level of cyber capability, which in turn requires additional safeguards.

Competitors' moves show that this development is part of a broader trend. Anthropic, for example, recently introduced its own model with strong cybersecurity capabilities for a limited group of organizations. OpenAI is opting for a broader rollout and a larger user base, SiliconANGLE adds.