2 min

A set of newly unveiled tools should make models in Azure AI Studio more accurate and secure. Microsoft is addressing a number of important AI vulnerabilities with the update.

The update brings five safety tools to Microsoft Azure AI Studio. First, Prompt Shields should enable organizations to prevent prompt injection attacks. Such an attack allows malicious actors to rewrite the instructions of an AI model, causing an LLM to leak training data, for example, or respond to unwanted prompts that it would otherwise reject. Prompt Shields is available in preview.

Driving AI models

Two other tools focus on the kinds of hazards that currently seem inherent to AI models. Even state-of-the-art LLMs can be at times opaque and inaccurate, but Microsoft hopes to mitigate this issue. Groundedness detections pick up factual inaccuracies in outputs (hallucinations) so developers can later refine their LLM. Safety system messages should allow models to be better controlled for safe outputs as well.

A new safety evaluation allows AI developers to discover if models can be jailbroken and potentially leak data. Microsoft has made this tool available in preview form.

With a newly added risk and security monitoring tool, an even more concrete picture of LLM inputs and outputs can be formed so that AI developers can apply certain content filters.

Broader initiative

The tools answer the demand for more AI safeguards, which has now also been formalised by politicians. In Europe, the EU has created the AI Act, although it risks curbing innovation in pursuit of security.

Read more: AI Act: Europe is blind to the law’s innovation problems

The United States is taking a different course of action to secure AI. In early February, Commerce Secretary Gina Raimondo announced that more than 200 companies had signed up for the Artificial Intelligence Safety Institute Consortium (AISIC). In addition to Microsoft, the participants included Amazon, Google, Meta, Nvidia and OpenAI and many industry stakeholders.

One of the AISAC aims is to foster the development of AI safety tools. AISIC is intended to serve as the platform to establish industry-wide standards that balance innovation and security. For now, steps are limited to specific solutions to get AI secure, such as Azure AI’s offering.

Also read: Microsoft understands that AI is more than a Copilot button