
LinkedIn has released an open-source software library named the LinkedIn Fairness Toolkit (LiFT). The toolkit is designed to enable the measurement of fairness in AI and machine learning models. According to LinkedIn, LiFT can measure biases in the training and scoring data sets that feed those models.

From there, researchers can evaluate notions of fairness for these models and quantify differences between selected subgroups.
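To make that concrete: LiFT itself is built for large-scale Spark pipelines, but the kind of measurement it performs can be pictured in a few lines of plain Python. The sketch below is an illustration of the idea, not LiFT's API; the column names and the choice of metric (the demographic parity gap, i.e., the spread in positive-outcome rates across subgroups) are assumptions.

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key="gender", label_key="label"):
    """Fraction of positive labels within each subgroup."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        g = row[group_key]
        totals[g] += 1
        positives[g] += row[label_key]
    return {g: positives[g] / totals[g] for g in totals}

data = [
    {"gender": "A", "label": 1}, {"gender": "A", "label": 0},
    {"gender": "B", "label": 0}, {"gender": "B", "label": 0},
]
rates = positive_rate_by_group(data)
# Demographic parity gap: spread between the highest and lowest subgroup rates.
print(rates, max(rates.values()) - min(rates.values()))
```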

Fairness in AI is not easy to define, which would explain why there are so many definitions. Each captures a different aspect of fairness as a concept. Models that monitor for these definitions are a first step toward creating a fair experience.
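Two of the most common definitions already illustrate the tension. Demographic parity asks whether subgroups receive positive predictions at the same rate; equal opportunity asks whether qualified members of each subgroup are recognized at the same rate. The toy example below (not LiFT code) shows a model that satisfies the first definition while violating the second.

```python
def group_rates(preds, labels, groups, group):
    """Positive-prediction rate and true-positive rate for one subgroup."""
    idx = [i for i, g in enumerate(groups) if g == group]
    positive_rate = sum(preds[i] for i in idx) / len(idx)
    true_pos = sum(1 for i in idx if preds[i] == 1 and labels[i] == 1)
    actual_pos = sum(1 for i in idx if labels[i] == 1)
    tpr = true_pos / actual_pos if actual_pos else float("nan")
    return positive_rate, tpr

preds  = [1, 0, 1, 1, 0, 1]   # model decisions
labels = [1, 0, 0, 1, 1, 1]   # ground truth
groups = ["A", "A", "A", "B", "B", "B"]
for g in ("A", "B"):
    print(g, group_rates(preds, labels, groups, g))
# Both groups receive positive predictions at the same rate (demographic
# parity holds), yet qualified members of B are recognized less often
# (equal opportunity is violated): the two definitions genuinely disagree.
```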

Incredible but limited

Even though the LiFT toolkit includes several provisions to address fairness-related problems, most tools in this space do not address the big picture: some of the large-scale problems are intertwined with highly specific cloud environments.

LiFT can be used for ad hoc fairness analysis or as part of a large-scale A/B testing system. It is useful both in exploratory analysis and in production, with bias measurement capabilities that can be embedded into machine learning training stages.
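One way to picture that production use is a post-training validation step that computes a per-subgroup metric and fails the pipeline stage when the gap is too wide. The sketch below is a hypothetical illustration of that pattern, not LiFT's interface; the metric, the threshold, and the function names are all assumptions.

```python
def subgroup_accuracy(preds, labels, groups):
    """Accuracy computed separately for each subgroup."""
    out = {}
    for g in set(groups):
        idx = [i for i, x in enumerate(groups) if x == g]
        out[g] = sum(preds[i] == labels[i] for i in idx) / len(idx)
    return out

def fairness_gate(preds, labels, groups, tolerance=0.1):
    """Pass only if the accuracy gap between subgroups is within tolerance."""
    per_group = subgroup_accuracy(preds, labels, groups)
    gap = max(per_group.values()) - min(per_group.values())
    return gap <= tolerance, gap, per_group

ok, gap, per_group = fairness_gate([1, 0, 1, 1], [1, 0, 0, 1], ["A", "A", "B", "B"])
print(ok, gap, per_group)  # False, 0.5, ... : this model would be flagged
```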

In addition, it brings to the forefront a metric-agnostic framework for testing whether the differences measured across the chosen subgroups are statistically significant.
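A standard metric-agnostic way to run such a test is a permutation test: shuffle the subgroup labels many times and count how often a gap at least as large as the observed one arises by chance. The sketch below shows the general idea in Python; it is a simplified stand-in, not LiFT's implementation, and because the metric is just a swappable function, the same test works for any fairness measure.

```python
import random

def mean_gap(values, groups):
    """Absolute difference in mean value between the two subgroups."""
    a = [v for v, g in zip(values, groups) if g == "A"]
    b = [v for v, g in zip(values, groups) if g == "B"]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def permutation_test(values, groups, n_rounds=10000, seed=0):
    """Estimate how often a gap at least this large arises when subgroup
    labels are reassigned at random (the null of no group difference)."""
    rng = random.Random(seed)
    observed = mean_gap(values, groups)
    shuffled = list(groups)
    hits = 0
    for _ in range(n_rounds):
        rng.shuffle(shuffled)
        if mean_gap(values, shuffled) >= observed:
            hits += 1
    return observed, hits / n_rounds  # observed gap and a p-value estimate

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.5]
groups = ["A", "A", "A", "B", "B", "B"]
print(permutation_test(scores, groups))
```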

Simplified deployment

According to LinkedIn, LiFT is reusable thanks to wrappers and a configuration language designed for deployment. At the topmost level, the library gives users a basic driver program, driven by simple configuration, that makes fairness measurement possible without requiring anyone to write new code.
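The shape of such a configuration-driven driver might look roughly like the following. Everything here, the config keys, the metric registry, and the function names, is a hypothetical sketch of the pattern, not LiFT's actual configuration language.

```python
# Hypothetical configuration-driven driver; the config keys and the metric
# registry below are illustrative assumptions, not LiFT's actual schema.
METRICS = {
    "positive_rate_gap": lambda rates: max(rates.values()) - min(rates.values()),
}

config = {
    "protected_attribute": "gender",   # which subgroup column to slice on
    "metrics": ["positive_rate_gap"],  # which registered metrics to run
}

def run_driver(config, rates_by_group):
    """Dispatch every configured metric; no user-written code required."""
    return {name: METRICS[name](rates_by_group) for name in config["metrics"]}

print(run_driver(config, {"A": 0.75, "B": 0.50}))  # {'positive_rate_gap': 0.25}
```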

The hope is that this will create a more equitable platform by weeding out harmful biases in models and ensuring that deserving people have equal access to opportunities.