
AI is a technology that offers tremendous opportunities but also raises great concerns. When it comes to trusting the technology, people often talk about Trustworthy AI: a model or AI application that takes security, reliability and ethics into account. We spoke with Reggie Townsend, Vice President of the Data Ethics Practice at SAS, about the path to Trustworthy AI.

In an earlier article about SAS, we discussed the three critical qualities the company believes are central to AI. In addition to performance and productivity, it’s about trust, exactly the topic we discussed with Townsend. Alongside his work at SAS, he is also active on the National Artificial Intelligence Advisory Committee, which advises the U.S. president on AI.

Trustworthy AI is a particularly big topic because of the rapid progression of artificial intelligence. As a result of that development, citizens, governments and organizations see the technology as a new innovation, and because of that image they worry about its risks. Townsend understands that way of thinking, but believes it is mostly a matter of time until we trust AI more readily and it becomes something we consider commonplace. He compares it to electricity. Nowadays we use electricity for countless applications without even thinking about it. For example, you put your plug into an outlet to charge your laptop because you have every confidence that nothing will go wrong. In the early days of humans harnessing electricity, however, it elicited fear, simply because many people are conditioned to fear the negative consequences of anything new.

What do we need for Trustworthy AI?

Townsend extends this view to the global trust issues currently at play. “In life, after winning trust, it is much easier to keep it, than to try to regain it,” Townsend said. According to surveys Townsend presented, people distrust government and media more than businesses. Companies face less doubt about their ethical conduct and competence, Townsend argues. In his view, this represents an opportunity for responsible innovation by businesses.

At the same time, Townsend points to the arrival of legislation, both in the U.S. and the EU. The EU in particular is currently being closely watched with regard to regulation, such as the AI Act. Opponents fear that such regulation will stifle innovation, potentially putting Europe’s competitiveness at risk. Townsend, however, sees it as a comprehensive opportunity to address the socio-technical nature of AI. AI affects people, processes and technology; proper legislation creates consistency and sets guardrails that protect people while still allowing innovation, according to Townsend.

Such legislation can help build human confidence, with the hope that more people will come to see that AI is there to make humans better. In this regard, Townsend also refers to human-centric AI: artificial intelligence that empowers humans.

Basis for and with technology

Of course, SAS also plays a big role in this area, because its platform provides a foundation for building AI. “Trustworthy AI has to start well before the first line of code is written,” Townsend said, before going on to cite the features of the SAS platform. The image below gives an idea of what it offers, but rather than stray too far into individual features, we’ll focus on the vision and the steps behind it.

SAS also generally talks about activating Trustworthy AI, taking into account the fundamental principles for trust. In Townsend’s view, four principles apply here: oversight, operations, compliance and culture.

Oversight in this case is primarily about bringing together those in charge within companies and government agencies and letting them deal with dilemmas. Let them assess the AI when the organization procures it, uses it in business processes or already has it in production; together they can determine whether the AI is harming people. “In addition, we have control options, focusing on risk management and harmonization of all regulatory agencies,” Townsend adds, “to make sure that we are looking at it from a common interest. And to make sure that we take points of interest and integrate them into technology.”

Finally, the aspect of culture is also important, with people acting on common principles. When employees within an organization take norms and values into account while building and working with AI, it creates space for models and systems we can trust. All of this, according to Townsend, leads to technology that improves the world while making money. With that aspiration, Townsend says, we are headed for a bright future with lots of AI.