
The Cloud Security Alliance (CSA) is expanding its work to encompass AI security, and recently launched the AI Safety Initiative for this purpose.

Within the CSA, many major (cloud) tech companies already collaborate on standards and best practices for cloud security. The group argues that the rise of (generative) AI creates new security challenges and calls for new standards to address them.

The CSA has therefore established the AI Safety Initiative. In this collaborative effort, all stakeholders can work together on improving AI security: not only the developers of AI models, but also, for example, the cloud giants that provide the necessary infrastructure.

This cooperation is already taking shape at various scales, bringing together large cloud providers such as Microsoft, AWS and Google on the one hand, and AI specialists such as Anthropic, OpenAI and, again, Microsoft and Google on the other.

Objectives for the AI Safety Initiative

More specifically, the AI Safety Initiative should ensure the creation and open sharing of reliable guidelines for AI safety. The initial focus is primarily on generative AI solutions and applications.

Concretely, the collaboration should produce tools, templates and knowledge that enable companies to use (generative) AI in a safe, ethical and compliant manner.

To this end, the AI Safety Initiative also wants to work with governments on regulation and on aligning AI safety standards with other industry standards. The aim is to eliminate potential (usage) risks and ensure AI has a positive impact across all business sectors.

Four Working Groups

The CSA’s AI Safety Initiative has already established four working groups in which all participating parties collaborate: the AI Technology and Risk Working Group, the AI Governance & Compliance Working Group, the AI Controls Working Group and the AI Organizational Responsibilities Working Group.

In total, more than 1,500 experts now collaborate in these working groups, according to the CSA.

Also read: ESA upgrades its security as space becomes susceptible to cybercrime