
Microsoft releases tool to check AI systems for threats


Microsoft has released Counterfit, a tool that allows developers to test the security of their AI systems.

According to the tech giant, Counterfit is essentially a generic command-line tool that can attack multiple AI systems at scale. Microsoft uses it to assess its own AI systems, but the tool is open source and available on GitHub. It can be run via Azure Shell in a browser or installed locally in an Anaconda Python environment.

Born out of our own need

“This tool was born out of our own need to assess Microsoft’s AI systems for vulnerabilities with the goal of proactively securing AI services, in accordance with Microsoft’s responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative,” Microsoft says in a blog post.

Agnostic to environments, models and data

According to Microsoft, the tool can examine AI models hosted in any cloud environment, as well as on-premises or in edge networks. Moreover, it can handle all types of AI models. Security professionals therefore do not need to study the inner workings of each model and can instead focus on security testing. Counterfit also handles all types of data, whether text, images or generic input.

Protection against adversarial machine learning

One use of the tool is to check whether an algorithm can be maliciously influenced. ZDNet cites examples such as fooling Teslas by sticking black tape on speed limit signs. Another example is Microsoft’s infamous Tay chatbot, which was manipulated into making racist remarks.
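The sketch below is not Counterfit code; it is a minimal, self-contained Python illustration (the toy linear model and all names are hypothetical) of what such an evasion-style attack amounts to: repeatedly nudging an input with small, targeted changes until the model's prediction flips.

import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a fixed linear classifier, w.x + b > 0 -> class 1, else class 0.
w = rng.normal(size=20)
b = 0.1

def predict(x: np.ndarray) -> int:
    return int(w @ x + b > 0)

# Pick an input that the model currently classifies as class 1.
x = rng.normal(size=20)
if predict(x) == 0:
    x = -x  # flip sign so the starting prediction is class 1

# For a linear model the gradient of the decision score w.r.t. the input is
# simply w; stepping against its sign lowers the score (an FGSM-style step).
epsilon = 0.05
adversarial = x.copy()
for _ in range(200):
    if predict(adversarial) == 0:
        break
    adversarial -= epsilon * np.sign(w)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(adversarial))
print("size of perturbation (L2 norm):", np.linalg.norm(adversarial - x))

Real attacks against image or text models work on the same principle, but against far more complex models and with constraints that keep the perturbation inconspicuous; tools like Counterfit automate this kind of probing at scale.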

Furthermore, the tool can be used to scan AI systems for vulnerabilities and to create logs in which any attacks are recorded.

Tip: Microsoft confirms Nuance acquisition for 16.5 billion euros