
To find out whether an AI model makes fair predictions, data scientists need to study not only its short-term effects but also its long-term effects. Google now makes this possible with ML-fairness-gym, a set of components that together can simulate the operation of algorithms over time.

The simulations make it possible to observe how an AI model behaves before it is actually deployed in the real world. ML-fairness-gym is open source, available on GitHub, and builds on OpenAI's Gym framework. AI-driven agents interact with simulated environments: at each step, the agent chooses an action that influences the environment in some way. By measuring the impact on the environment, as well as the steps taken to produce that impact, it becomes clear what the effects will be over time.
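
To illustrate the step-by-step pattern, the sketch below shows a Gym-style reset/step loop in Python. The ToyLendingEnv and threshold_agent here are hypothetical stand-ins invented for this example, not the actual ml-fairness-gym API; they only demonstrate how an agent's actions feed back into the environment it is being measured against.

```python
import random


class ToyLendingEnv:
    """Toy environment: each step, one applicant from one of two groups
    asks for a loan. Repaid loans slowly raise that group's repayment
    odds, a simplified long-term feedback effect."""

    def __init__(self):
        # Hypothetical starting repayment probabilities per group.
        self.repay_prob = {"A": 0.7, "B": 0.5}
        self._current = None

    def reset(self):
        self._current = self._sample_applicant()
        return self._current

    def _sample_applicant(self):
        group = random.choice(["A", "B"])
        return {"group": group, "repay_prob": self.repay_prob[group]}

    def step(self, action):
        """action: True = grant the loan. Returns (obs, reward, done,
        info) in the classic Gym style."""
        reward = 0.0
        if action:
            repaid = random.random() < self._current["repay_prob"]
            reward = 1.0 if repaid else -1.0
            if repaid:
                group = self._current["group"]
                # Feedback: a repaid loan improves the group's future odds.
                self.repay_prob[group] = min(0.95, self.repay_prob[group] + 0.01)
        self._current = self._sample_applicant()
        return self._current, reward, False, {}


def threshold_agent(obs, threshold=0.6):
    """Hypothetical agent: grant a loan when the estimated repayment
    odds clear a fixed threshold."""
    return obs["repay_prob"] >= threshold


# The step-by-step loop the article describes: act, observe, repeat.
env = ToyLendingEnv()
obs = env.reset()
for _ in range(1000):
    obs, reward, done, info = env.step(threshold_agent(obs))
```

In the real library the environments, agents, and metrics are separate pluggable components, but the interaction contract is the same reset/step loop shown here.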

Closed-form analysis

The approach is an alternative to closed-form analysis, in which an algorithm's behavior is derived analytically in a finite number of calculations. With ML-fairness-gym, new calculations are made after the dataset has already been changed by the model, so the data is no longer static but keeps changing step by step. This gives a fairer picture than evaluating against existing real-world datasets, which never reflect the changes the model itself would cause.
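
To make the contrast concrete, the sketch below (reusing the hypothetical ToyLendingEnv and threshold_agent from the previous example) compares a one-shot snapshot of the data with the same data after the model's decisions have fed back into it.

```python
# Static snapshot: the repayment odds as they exist before deployment.
env = ToyLendingEnv()
print("before:", dict(env.repay_prob))   # {'A': 0.7, 'B': 0.5}

# Dynamic run: each decision the agent makes feeds back into the data.
obs = env.reset()
for _ in range(5000):
    obs, reward, done, info = env.step(threshold_agent(obs))

print("after:", dict(env.repay_prob))
# Group A's odds compound toward the 0.95 cap, while group B, always
# rejected at the 0.6 threshold, never improves. A single analysis of
# the frozen snapshot would show the initial gap but not this widening.
```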

“We created the ML-fairness-gym framework to help ML practitioners bring simulation-based analysis to their ML systems, an approach that has proven effective in many fields for analyzing dynamic systems, where closed form analysis is difficult,” Google Research software engineer Hansa Srinivasan wrote.