OpenAI warns of cyber risks posed by new AI models

OpenAI announced Wednesday that it is establishing a new advisory board, the Frontier Risk Council, as part of a broader strategy to better manage the risks surrounding advanced AI models.

As Reuters reports, OpenAI warned that the next generation of models could pose a significantly higher cybersecurity risk, as they appear increasingly capable of identifying complex vulnerabilities and even developing working zero-day exploits. According to OpenAI, such systems could also contribute to digital intrusions in corporate or industrial environments, with real-world physical effects.

The Frontier Risk Council is to play a central role in mitigating such threats. This advisory group comprises experienced cybersecurity specialists and other experts in digital defense. They will work closely with OpenAI’s technical teams to help assess risks, align model capabilities, and develop security frameworks for future releases.

The council will initially focus on cybersecurity, but its remit is expected to broaden over time to other risks arising from increasingly powerful AI models.

Focus on defensive cybersecurity tasks

At the same time, OpenAI is investing in AI capabilities for defensive cybersecurity, developing models and tools that help security teams with work such as code audits and vulnerability remediation. This defensive push is complemented by stricter access controls, infrastructure hardening, restrictions on outgoing data traffic, and more extensive monitoring to prevent misuse.

In addition, OpenAI is preparing a program that will give certain users and organizations involved in cyber defense access to more advanced AI capabilities, depending on their role and level of trustworthiness. In this way, the company aims to regulate the availability of powerful features more tightly while still allowing professional defenders to reap the benefits.

With the launch of the Frontier Risk Council, new defensive tools, and stricter security measures, OpenAI is trying to strike a balance between innovation and risk management. According to the company, the growing impact of advanced AI requires close collaboration with external experts to ensure responsible further development.