Tech giants and AI developers aim to improve the security of AI applications through the Coalition for Secure AI (CoSAI). Among other things, they plan to do so by developing dedicated tooling and setting up an ecosystem for sharing best practices.
The security of AI applications is becoming increasingly important, and the tech industry wants to take its share of the responsibility. Several tech giants and AI developers have therefore joined forces and established the CoSAI partnership. CoSAI operates under the banner of the non-profit organization OASIS Open, which hosts a variety of open-source projects, particularly in the field of cybersecurity.
Participants include the big three hyperscalers, AWS, Microsoft, and Google, as well as AI developers such as OpenAI, Anthropic, Cohere, and GenLab. Other participants include well-known tech companies such as Nvidia, Intel, IBM, Cisco, PayPal, Wiz, and Chainguard.
The collaboration under CoSAI is meant to complement AI security initiatives already underway. The partnership therefore indicates it will take into account the work of the OpenSSF, the AI Alliance, the Cloud Security Alliance, the Partnership on AI, and the Frontier Model Forum.
Two goals
The new partnership has two goals. First, it aims to provide companies and organizations with the necessary tooling and technical expertise to secure their AI applications. Second, the alliance wants to create an ecosystem where companies can share their best practices and technology for AI-related cybersecurity.
Three open-source “workstreams,” or projects, are being launched for these purposes. The first is meant to help software developers scan their ML workloads for security risks. To this end, a taxonomy of known vulnerabilities and the measures that counter them is being developed. CoSAI participants also want to develop a “cybersecurity scorecard” for monitoring AI systems for vulnerabilities and reporting them to stakeholders, along the lines sketched below.
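CoSAI has not yet published this tooling, so the following is only a minimal Python sketch of what a vulnerability taxonomy and scorecard in this spirit could look like; the category names, the Finding type, and the build_scorecard helper are hypothetical illustrations, not part of any CoSAI deliverable.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical taxonomy entries; CoSAI's actual taxonomy is still being developed.
TAXONOMY = {
    "data-poisoning": "Training data manipulated to change model behaviour",
    "model-theft": "Exfiltration of model weights or architecture",
    "prompt-injection": "Untrusted input that overrides system instructions",
    "dependency-vuln": "Known vulnerability in an ML framework or library",
}

@dataclass
class Finding:
    component: str   # e.g. a pipeline step or model endpoint
    category: str    # key into TAXONOMY
    severity: str    # "low" | "medium" | "high"

def build_scorecard(findings: list[Finding]) -> dict:
    """Aggregate scan findings into a simple report for stakeholders."""
    unknown = [f for f in findings if f.category not in TAXONOMY]
    if unknown:
        raise ValueError(f"Findings outside the taxonomy: {unknown}")
    return {
        "total_findings": len(findings),
        "by_category": dict(Counter(f.category for f in findings)),
        "by_severity": dict(Counter(f.severity for f in findings)),
    }

if __name__ == "__main__":
    scan_results = [
        Finding("training-pipeline", "data-poisoning", "high"),
        Finding("inference-api", "prompt-injection", "medium"),
        Finding("inference-api", "dependency-vuln", "low"),
    ]
    print(build_scorecard(scan_results))
```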
The partnership’s second project focuses on countering AI security risks. The goal is to identify the investments and mitigation techniques needed to address the security impact of AI use.
CoSAI’s third initiative addresses the risks of supply chain attacks, for example when software components are pulled from public repositories and platforms such as GitHub. Checking these components for vulnerabilities is often a long and complicated process, and CoSAI wants to develop workflows that simplify it.
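The article does not describe what those workflows will look like, but the kind of check they would automate can be illustrated with the public OSV.dev vulnerability database. The sketch below is only an example (the package name and version are chosen arbitrarily) and is not CoSAI tooling: it asks OSV whether known vulnerabilities are recorded for a single dependency.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return the IDs of known vulnerabilities for one package version via OSV.dev."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]

if __name__ == "__main__":
    # Example dependency; in practice this would loop over a project's lockfile.
    print(known_vulnerabilities("tensorflow", "2.4.0"))
```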
Instilling fear of open-source
In parallel, the alliance will also pay close attention to the risks of using third-party AI models to develop solutions and applications. Think of open-source neural networks adopted as a substitute for building costly proprietary models. Such models can, however, introduce vulnerabilities of their own and give attackers an opening to abuse them.
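The article does not spell out what safeguards CoSAI has in mind here. One common precaution, sketched below with a hypothetical file name and publisher-provided hash, is to verify the checksum of a downloaded third-party model before loading it, so that a tampered or swapped artifact is rejected.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file without reading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> None:
    """Refuse to use a third-party model whose contents do not match the pinned hash."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(f"Checksum mismatch for {path}: got {actual}")

if __name__ == "__main__":
    # Hypothetical model file and pinned hash published by the model's maintainer.
    verify_model(Path("open_model.safetensors"), expected_sha256="0" * 64)
```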
The parties within CoSAI plan to start more projects in the future. All initiatives are overseen by a technical committee of AI experts from academia and the private sector.