In a blog post, Google has introduced its Secure AI Framework (SAIF, PDF). With six principles, it hopes to help organizations deploy artificial intelligence responsibly.
Several Big Tech companies already maintain their own lists of “best practices” around AI. In June 2022, for example, Microsoft set out (PDF) how it upholds a Responsible AI Standard internally. Google, however, focuses mainly on showing how other organizations can handle the technology safely and carefully. Much of it is familiar territory that applies equally to more “conventional” software development. In addition, Google considers AI versus AI: after all, bad actors will be just as excited about artificial intelligence’s potential as legitimate companies are.
Do what you were already doing, and more
Firstly, Google argues that a solid security foundation is essential in an AI ecosystem. Anyone setting up a proprietary model or, for example, feeding a publicly available large language model (LLM) with unique data must have the surrounding IT infrastructure in order. Zero-trust policies should prevent anyone from making off with your model or injecting malicious code. Google also points to the risks around training data that contains sensitive information. Consider a bank deploying AI to detect fraud: the model behind it may have been fed sensitive financial data, making it essential that this data does not fall into the wrong hands.
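To make that training-data concern concrete: sensitive values can be masked before records ever reach a training pipeline. The Python sketch below is a minimal illustration of that idea; the patterns and field names are our own assumptions rather than anything from SAIF, and a real deployment would rely on a vetted data-loss-prevention service instead of hand-rolled regexes.

```python
import re

# Hypothetical patterns for sensitive values in bank transaction records.
# A production system would use a dedicated DLP/classification service.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(record: dict) -> dict:
    """Mask sensitive substrings in every string field of a record."""
    clean = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = IBAN_RE.sub("[ACCOUNT]", value)
            value = EMAIL_RE.sub("[EMAIL]", value)
        clean[key] = value
    return clean

sample = {"id": 42, "note": "Refund to NL91ABNA0417164300, contact jan@example.com"}
print(redact(sample))
# {'id': 42, 'note': 'Refund to [ACCOUNT], contact [EMAIL]'}
```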
Second, it is vital to apply extended detection & response (XDR) to AI as well. Threat intelligence should detect attacks before they become a problem, while the inputs and outputs of generative AI systems should be monitored (a minimal sketch of such monitoring follows below). Preparedness for cyber-attacks leaves much to be desired as it is, as Cisco’s Tom Gillis observed when we spoke with him in April. AI only reinforces the need to genuinely improve here.
Tip: Not all XDR platforms are created equal: quality telemetry is critical
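What monitoring the inputs and outputs of a generative AI system could look like in practice is sketched below in Python. This is a minimal illustration under our own assumptions: the `model_call` parameter, the logging format and the injection phrases are hypothetical, not Google tooling.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-monitor")

# Hypothetical phrases that often signal a prompt-injection attempt.
SUSPICIOUS = ("ignore previous instructions", "reveal your system prompt")

def monitored_completion(model_call, prompt: str) -> str:
    """Wrap an arbitrary LLM call with input/output logging and screening.

    `model_call` stands in for whatever client function actually queries
    the model; it is not a real API.
    """
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS):
        log.warning("possible prompt injection: %r", prompt[:80])
    start = time.monotonic()
    response = model_call(prompt)
    log.info("prompt=%r response_len=%d latency=%.2fs",
             prompt[:80], len(response), time.monotonic() - start)
    return response

# Usage with a stand-in model:
echo = lambda p: f"echo: {p}"
print(monitored_completion(echo, "Summarize this quarterly report."))
```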
AI versus AI: new challenges
In addition, AI must be deployed to protect AI systems themselves. Defence automation is becoming increasingly important as artificial intelligence is democratized. Malicious actors will start using the technology too, although for now this is not producing earth-shattering changes in cybercrime: it still seems to be mostly about writing phishing emails with the help of ChatGPT, for example. Over time, however, hackers armed with AI will hunt for other AI models, which requires a defence structure that responds dynamically to the threat landscape.
Logically, Google also advocates consistent security governance; AI should not be left out of it. Furthermore, techniques such as reinforcement learning based on human feedback are important to keep an AI model resilient against cyber threats. Finally, the company discusses end-to-end risk assessments, which require organizations to look both at the whole and at each specific step in the AI development process. Examples include checking for data breaches and validating an AI model in operation.
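As a minimal illustration of that last example, validating a model in operation can be as simple as periodically scoring the live model against a labelled reference set and raising an alert when accuracy falls below a baseline. The sketch below does exactly that; all names, data and thresholds are assumptions for illustration, not part of SAIF.

```python
# Sketch: score a live model on a labelled reference set and alert on drift.

def accuracy(model_predict, reference) -> float:
    """Fraction of reference examples the model classifies correctly."""
    hits = sum(1 for text, label in reference if model_predict(text) == label)
    return hits / len(reference)

def validate_in_operation(model_predict, reference, baseline=0.90) -> bool:
    score = accuracy(model_predict, reference)
    if score < baseline:
        print(f"ALERT: live accuracy {score:.2f} below baseline {baseline:.2f}")
        return False
    print(f"OK: live accuracy {score:.2f}")
    return True

# Usage with a toy fraud classifier and a tiny reference set:
reference = [("wire transfer to new account", "fraud"),
             ("monthly rent payment", "legit"),
             ("gift card purchase x50", "fraud")]
toy_model = lambda text: "fraud" if "account" in text or "gift card" in text else "legit"
validate_in_operation(toy_model, reference)
```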
What Google itself is already tinkering with
To keep AI on track, Google does more than just write frameworks. CEO Sundar Pichai is a familiar face in Brussels and Washington, where he advocates general AI legislation. That keeps both Google’s internal policies on AI development and their external effects in the spotlight. The motive for this need not be viewed cynically: a publicly traded company does not want a significant controversy around its own technology. Indeed, the battle to remain competitive with a party like OpenAI could tempt Google into taking unnecessary risks, and guarding against that requires more than best practices for organizations alone.
Tip: Are Google and OpenAI the right partners to regulate AI?
Google is also communicating directly with organizations about AI security. For example, it makes information from its Mandiant and TAG research teams available so that knowledge of threats stays up to date. In addition, it tries to keep its own products secure through various bug bounty programmes. It also partners with parties such as GitLab and Cohesity on open-source tools that put SAIF’s principles into practice.