
Few would dispute anymore that AI offers great opportunities. However, concerns about its concrete deployment are growing, especially when it comes to the reliability of the technology. One sector where this dilemma is particularly evident is healthcare: deploying AI there promises many benefits, but untrustworthy AI can also have major consequences. What is needed to promote and ensure the proper deployment of AI within hospitals and other healthcare organizations? And to what extent does a “human touch” in AI help when human influences can themselves create bias?

Véronique Van Vlasselaer, Data & Decision Scientist at SAS, and Davy Van De Sande, PhD candidate in Artificial Intelligence in Intensive Care Medicine at Erasmus MC, discussed this at the World Summit AI 2023, the world’s largest and leading AI summit, which brings together the top minds in the field of AI.

‘Bias in’ is ‘bias out’

To what extent can AI be used and trusted within the decision-making process? Véronique Van Vlasselaer explains, “We need to be aware that the data used to develop AI systems is not always neutral. Data comes from the digitization of human processes and behaviors, including our natural biases. This human bias, however well hidden in the data, can be absorbed by AI systems; bias in means bias out.” Understanding where this bias comes from and what role humans play in it is an important step in making AI more trustworthy.

To illustrate that a biased dataset can have a major impact on decision-making, Van De Sande used an example from healthcare. A US study showed that Black patients received 30% less care than they actually needed because of the recommendations of a deployed AI model. Van De Sande: “The AI model used to assess patients systematically underestimated the severity of care needed for this demographic group, because the data used to train the algorithm was not a good representation of the population.”
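
How a skewed dataset turns into skewed decisions can be shown in a few lines of code. The Python sketch below is a minimal illustration, not the model from the study: all data is synthetic, and the proxy feature (standing in for something like historical healthcare cost) is assumed to under-measure care need for one group.

```python
# A minimal "bias in, bias out" sketch on synthetic data (hypothetical setup).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort: group A is heavily over-represented in the training data.
n_a, n_b = 900, 100
n = n_a + n_b
group = np.array(["A"] * n_a + ["B"] * n_b)
severity = rng.normal(0.0, 1.0, n)                       # true care need
needs_care = (severity + rng.normal(0, 0.5, n) > 0.5).astype(int)

# Proxy feature (e.g., historical cost) that under-measures severity for
# group B -- the kind of skew "bias in" refers to.
proxy = severity + np.where(group == "B", -0.6, 0.0) + rng.normal(0, 0.3, n)

model = LogisticRegression().fit(proxy.reshape(-1, 1), needs_care)
pred = model.predict(proxy.reshape(-1, 1))

# "Bias out": patients who needed care but were missed, per group.
for g in ("A", "B"):
    mask = (group == g) & (needs_care == 1)
    print(f"group {g}: false-negative rate = {1 - pred[mask].mean():.2f}")
```

Running this shows a markedly higher false-negative rate for the under-represented group, even though the model itself contains no explicit group information.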

Bias in AI models due to algorithms

In addition to bias in the data, bias in AI models can also result from the algorithms used to create them. “As humans, we are extremely bad at distinguishing between the universal truth and some exceptions, and this reflects itself in the AI systems we develop,” explains Van Vlasselaer. “AI tries to find patterns in data by running an algorithm on it. As a data scientist, you are trained to capture the patterns in data as well as you can by building the most appropriate AI model possible, trying to avoid so-called ‘over- or underfitting’. An ‘overfitted’ model overreacts to small changes in the data, while an ‘underfitted’ model reacts poorly to specific patterns. As a data scientist, you look for the right balance between the two extremes to ensure that the results of your algorithm are usable and free of ‘noise’. But what is noise? If exceptional patterns arise from certain characteristics, preferences or interests of a small group, then these patterns, and therefore this group, should not be ignored. So even in the tuning of the model, bias can arise.”
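
That balance between the two extremes is easy to demonstrate. The sketch below uses synthetic data and arbitrarily chosen polynomial degrees: models of increasing complexity are fitted to the same noisy sample and scored on a held-out validation set, where both extremes generalize poorly.

```python
# Under- vs. overfitting on synthetic data: compare validation error
# for polynomial models of increasing degree.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1, 1, 60))
y = np.sin(3 * x) + rng.normal(0, 0.2, 60)   # underlying pattern plus noise

x_train, y_train = x[::2], y[::2]            # half the points for fitting
x_val, y_val = x[1::2], y[1::2]              # held-out points for validation

for degree in (1, 4, 15):
    coefs = P.polyfit(x_train, y_train, degree)
    val_mse = np.mean((P.polyval(x_val, coefs) - y_val) ** 2)
    print(f"degree {degree:2d}: validation MSE = {val_mse:.3f}")

# Degree 1 underfits (misses the pattern), degree 15 overfits (chases noise);
# an intermediate degree typically gives the lowest validation error.
```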

Finally, the decisions people make based on AI outcomes are not neutral either, due to interpretation bias. “Humans are generally bad at interpreting statistics, so decisions based on AI outcomes may also involve biases,” Van Vlasselaer said. Social and contextual factors influence how outcomes are interpreted. Take healthcare again as an example: a recent study showed that the same outcome can be evaluated completely differently by a healthcare provider depending on the patient’s gender, age, ethnicity, et cetera. Thus, the same outcome of an AI system (e.g. further hospitalization) can have completely different consequences for patients of a different skin color, age or gender. Unfortunately, no standards exist (yet) to resolve this in a systematic way, making human decisions and interpretations a third source of bias in AI.
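
Although no standard exists yet, an organization could at least start measuring this effect. The sketch below is a hypothetical audit with an assumed decision-log schema and made-up numbers: it isolates cases where the AI score was effectively identical and compares the human decisions per patient group.

```python
# Hypothetical interpretation-bias audit; column names and values are
# illustrative, not a real schema.
import pandas as pd

# Each row: the AI's risk score, the patient's group, and the human
# reviewer's final decision.
log = pd.DataFrame({
    "ai_score":     [0.71, 0.72, 0.70, 0.71, 0.73, 0.70, 0.72, 0.71],
    "group":        ["A",  "A",  "A",  "A",  "B",  "B",  "B",  "B"],
    "hospitalized": [1,    1,    1,    0,    0,    1,    0,    0],
})

# Restrict to a narrow score band so the AI output is effectively the same,
# then compare how often reviewers chose hospitalization per group.
band = log[log["ai_score"].between(0.70, 0.73)]
print(band.groupby("group")["hospitalized"].mean())
# A persistent gap at equal scores would flag interpretation bias to audit.
```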

Bias, and then what?

Going back to the healthcare example: AI is extremely important here to drive innovation and to make the impact on patient care that is so desperately needed. However, projects often get stuck in the development phase. This lack of implementation can be attributed partly to the significant concerns and mistrust surrounding the use of AI, as the realization grows that biases also permeate AI. Technology alone does not offer a way out here; the combination of technology and humans is essential and will remain indispensable in applying AI in practice.

Understanding bias in AI and the human influence on it is an important first step. This also brings a responsibility for AI developers and all stakeholders to be aware of their own biases. However, biases are deeply rooted in society, so this wake-up call by itself is not enough. The best way to mitigate bias in AI remains the use of human intelligence and more diversity within teams, through which biases can be neutralized. The human element, then, remains imperative.

This article was submitted by SAS.