IBM launches toolkit that can detect bias in artificial intelligence


IBM has launched a number of tools that should help companies prevent the artificial intelligence they use from being biased. Part of the code has been made open source, so that more developers can use it. According to IBM, the technology can help drive the wider rollout of AI.

IBM told ZDNet that it is releasing cloud software designed to manage the deployment of artificial intelligence. The software also helps detect bias in models, which helps developers and system administrators reduce its impact.

Understanding choices

The technology comes at a time when companies are increasingly choosing to roll out artificial intelligence. Decisions are made by machine learning models of various kinds, but it is often difficult to understand the reasons behind particular choices. IBM’s cloud software works with various frameworks, including AWS SageMaker, AzureML, SparkML, TensorFlow and Watson.

IBM has in any case made the tool that detects whether AI is making fair decisions open source and has placed it on GitHub. The AI Fairness 360 toolkit, as it is called, provides algorithms, code and manuals. IBM hopes that academics, researchers and data scientists will apply it to their own models.
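To give an idea of what such a toolkit measures: below is a minimal, illustrative sketch (not IBM’s actual code) of one common fairness metric, statistical parity difference, which compares the rate of favorable outcomes between two groups. The loan-approval data is entirely hypothetical.

```python
# Illustrative sketch of a fairness metric of the kind provided by
# toolkits such as AI Fairness 360. Not IBM's implementation.

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    """Rate(unprivileged) - Rate(privileged).
    0 means parity; negative values mean the unprivileged group
    receives the favorable outcome less often."""
    return selection_rate(unprivileged) - selection_rate(privileged)

# Hypothetical loan-approval decisions (1 = approved) for two groups:
group_a = [1, 0, 1, 1, 0]   # privileged group: 3 of 5 approved
group_b = [0, 0, 1, 0, 0]   # unprivileged group: 1 of 5 approved

spd = statistical_parity_difference(group_b, group_a)
print(round(spd, 2))  # -0.4: a 40-point gap in approval rates
```

A metric like this flags a disparity but does not by itself explain it; toolkits of this kind pair such metrics with mitigation algorithms that adjust the data or the model.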

Lack of trust

According to Ritika Gunnar, vice president of IBM Watson Data and AI, a lack of trust in, and transparency of, AI models is the reason companies have not yet rolled out AI widely: there are concerns about the decisions AI makes and the possibility that those choices will negatively affect companies.

Since IBM is planning to roll out Watson AI as widely as possible and, if possible, to get hold of a large part of the emerging AI market, the launch of this product is only logical. Recent research has shown that 82 percent of companies are considering deploying AI applications, but 60 percent fear reliability issues.

The bias of AI models goes beyond factors such as gender or race. For example, AI used to streamline insurance claims takes into account factors such as the duration of a policy, the value of a vehicle, age and even postcode. Judgments are made on that basis, even though the AI should look beyond such factors.
