IBM is donating three open-source artificial intelligence development toolkits to LF AI, an organization within the Linux Foundation that maintains open-source machine learning tools. The LF AI Technical Advisory Committee approved the move earlier in June, and IBM is now in the process of transferring the projects to the organization.
Each of the three toolkits serves a different purpose: one is designed to help remove bias from AI projects, another focuses on securing neural networks against attack, and the third makes model output explainable so that decisions can be verified.
The AI Fairness 360 Toolkit contains almost a dozen algorithms to mitigate bias in components that go into the development of a machine learning project. The algorithms can fix the bias in the data that an AI processes, the predictions provided as output, and the model itself.
The toolkit also provides evaluation metrics for assessing the training dataset used to sharpen a neural network's capabilities during development.
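To illustrate the kind of data-level bias mitigation described above, here is a minimal, self-contained sketch of instance reweighing, the idea behind one of AI Fairness 360's pre-processing algorithms: examples from group/label combinations that are under-represented relative to statistical independence get larger weights. This is not the toolkit's actual API, only a toy illustration; the function name and data are made up.

```python
from collections import Counter

def reweigh(groups, labels):
    """Compute instance weights w(g, y) = P(g) * P(y) / P(g, y).
    Under-represented group/label pairs are up-weighted, so a model
    trained with these weights sees a de-biased distribution."""
    n = len(labels)
    p_group = Counter(groups)            # counts per protected group
    p_label = Counter(labels)            # counts per outcome label
    p_joint = Counter(zip(groups, labels))  # counts per (group, label) pair
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group 'a' rarely receives the favorable label 1.
groups = ['a', 'a', 'a', 'b', 'b', 'b']
labels = [0, 0, 1, 1, 1, 0]
weights = reweigh(groups, labels)
# The single ('a', 1) example gets weight 1.5; the common ('a', 0)
# examples get weight 0.75.
```

The same multiplicative correction can be read off for any group/label pair, which is what makes this pre-processing approach easy to audit.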
The second tool is called the Adversarial Robustness 360 Toolbox. With it, developers can make their AI models resilient to adversarial attacks, a kind of attack in which carefully crafted inputs are fed to a neural network to make it produce a wrong result.
This package includes algorithms that harden models, as well as pre-packaged attacks that developers can use to test their neural networks' resilience.
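A concrete example of such a pre-packaged attack is the Fast Gradient Sign Method (FGSM), one of the classic adversarial attacks this kind of toolbox ships. The sketch below applies it to a simple linear classifier, where the gradient sign can be written down by hand; the function names and numbers are illustrative, not the toolbox's API.

```python
def fgsm_perturb(x, w, y, eps):
    """Fast Gradient Sign Method against a linear scorer f(x) = w . x.
    For a margin loss the gradient sign w.r.t. x is sign(-y * w), so each
    feature is nudged by eps in the direction that hurts the true label y
    (y is +1 or -1)."""
    def sign(v):
        return (v > 0) - (v < 0)
    return [xi + eps * sign(-y * wi) for xi, wi in zip(x, w)]

def predict(x, w):
    """Linear classifier: +1 if the score is non-negative, else -1."""
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) >= 0 else -1

w = [2.0, -1.0]
x = [0.3, 0.2]                       # score = 0.4 -> predicted +1
adv = fgsm_perturb(x, w, y=1, eps=0.5)
# adv = [-0.2, 0.7], score = -1.1 -> prediction flips to -1
```

A small, uniform nudge per feature is enough to flip the prediction, which is exactly the fragility that robustness toolkits let developers measure and then train against.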
The third toolkit is the AI Explainability 360 Toolkit. It addresses the fact that, because of the inherent complexity of neural networks, it is often hard to explain why an AI model makes the decisions it does.
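To make the idea of an "explanation" concrete, here is a minimal sketch of per-feature attribution for a linear model, the simplest case of the kind of explanation such toolkits produce: each feature's contribution to the score is reported separately, relative to a baseline input. The function name and values are made up for illustration; real explainability methods generalize this to nonlinear models.

```python
def linear_attributions(x, w, baseline):
    """Attribute a linear model's score to individual features:
    contribution_i = w_i * (x_i - baseline_i). The contributions sum to
    the score difference between x and the baseline, giving a verifiable
    per-feature explanation of the prediction."""
    return [wi * (xi - bi) for wi, xi, bi in zip(w, x, baseline)]

w = [1.5, -2.0, 0.5]
x = [2.0, 1.0, 4.0]
baseline = [0.0, 0.0, 0.0]
contribs = linear_attributions(x, w, baseline)
# contribs = [3.0, -2.0, 2.0]: feature 0 pushes the score up the most,
# feature 1 pushes it down.
```

Because the contributions add up exactly to the change in the model's score, a reviewer can check the explanation against the prediction, which is the verification use case the article mentions.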
Raghavan, who heads the Research AI group at IBM, said that IBM thinks of its AI agenda in three pieces: advancing, scaling, and trusting AI. The aim is to build AI that is understood, trustworthy, and controllable.