In recent years, Thales has invested heavily in new technology, both through its own research and through acquisitions: a total of 7 billion euros since 2016. One of the technologies Thales is betting on is artificial intelligence, or AI. This poses a problem, however, because there is no regulation or certification for AI yet. That must change quickly, says CEO Patrice Caine.
For those unfamiliar with the Thales Group: this French company supplies complex hardware and software solutions to governments and infrastructure companies, with customers in defense, intelligence services, airports and rail.
We recently visited Thales to hear from CEO Patrice Caine what the company is doing and what its vision is for AI, a technology in which it is investing substantially. To this end, Thales works together with the University of Montreal. One of the university's professors, Yoshua Bengio, was also present during our visit to share his vision of AI. Last year, Bengio appeared on Dutch broadcaster VPRO to explain deep learning.
Thales AI solutions have to be perfect; there is no room for error.
Thales is now investing in AI solutions that are far more advanced than the technology it applies today. The question is how long it will take before Thales products ship with this advanced artificial intelligence. According to Caine, if Thales is going to use this kind of advanced AI, it has to be absolutely perfect: given the customers the company serves and the ways in which its technology is used, there is often no room for error.
There is also the question of what you do and do not call AI. Thales' radar solutions can already distinguish a rocket from a bird, or a rocket from an airplane, which is no small feat. This is not new: the company has been able to do it for a long time, well before the discussion and hype around AI arose. Whether we call that AI is therefore debatable. Professor Bengio adds that current AI innovations have already meant a lot to society, and that building on them could make society even better over the next ten years. The bottom line, according to the professor, is that today's AI is actually still very stupid: it depends entirely on its software and machine learning algorithms, and mistakes and bias are still difficult to prevent.
That is where the crux lies for Thales, too. Caine states that there is no room for stupidity. When a navy ship detects a flying object with a Thales radar, there is very little time: the crew wants to know as soon as possible whether it is a drone, a fighter jet or perhaps a passenger plane. If the AI makes the wrong assessment, the consequences can be severe.
Many AI solutions are currently far from perfect. A good example cited by Thales is Spotify's recommendation engine. You can play a certain playlist or artist and let Spotify queue up similar music. Anyone who has tried this will hear a familiar song from time to time, but also plenty of poor matches: the AI does not always make the best choice. At Spotify this is easy to forgive; you just skip to the next song. A train that has to stop, a defense system that has to respond, or air traffic control at a busy airport are far less forgiving: mistakes can have dramatic consequences.
Thales sees a solution in an AI that can explain its choices.
The big challenge Thales now faces is developing an AI that is not so stupid. A flawless AI is not feasible at the moment, so Thales has looked carefully at how it can get there. The company sees the solution in an AI that can explain how it arrived at its choice. It is a bit like maths at school: the final answer earned you one point, but showing how you got there earned you three.
If an AI can explain exactly why it makes certain choices, then it is also much easier to correct any errors and perfect the AI further. This is something that Thales focuses on.
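To make the idea of an AI that can explain its choices concrete, here is a minimal, purely illustrative sketch in Python. The feature names, thresholds, and labels are invented for this example and have nothing to do with Thales's actual classifiers; the point is only that the classifier returns its reasoning alongside its verdict, so a human operator can audit and correct it.

```python
# Hypothetical sketch of "explainable" classification: along with its verdict,
# the classifier reports which rules fired, so a human can audit the decision.
# All feature names, thresholds, and labels below are illustrative only.

def classify_track(speed_kmh, altitude_m, size_m):
    """Classify a radar track and return (label, list of reasons)."""
    reasons = []
    if size_m < 2:
        reasons.append(f"small object ({size_m} m across)")
        if speed_kmh < 200:
            reasons.append(f"slow ({speed_kmh} km/h)")
            label = "drone or bird"
        else:
            reasons.append(f"fast ({speed_kmh} km/h)")
            label = "missile"
    else:
        reasons.append(f"large object ({size_m} m across)")
        if altitude_m > 9000 and speed_kmh < 1000:
            reasons.append(f"cruising altitude ({altitude_m} m) at airliner speed")
            label = "passenger plane"
        else:
            reasons.append(f"altitude {altitude_m} m, speed {speed_kmh} km/h")
            label = "fighter"
    return label, reasons

label, reasons = classify_track(speed_kmh=850, altitude_m=10500, size_m=60)
print(label)                  # passenger plane
print("; ".join(reasons))
```

Because every verdict comes with the rules that produced it, a wrong call can be traced to a specific rule and fixed, which is exactly the advantage the article describes over opaque black-box models.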
Preventing AI from being misused for bad intentions
If Thales or another company succeeds in making AI less stupid, perhaps by having it explain how it reaches its choices, it will also become easier to abuse that AI for malicious purposes. Drones equipped with AI are seen as one of the biggest problems for the future: they could be abused to carry out attacks, or even contract killings based on facial recognition.
The big question, of course, is how to prevent this. Bengio, the University of Montreal professor, is rather unrealistic on this point: he sees world peace as the solution. The CEO of Thales is a little more pragmatic. He thinks it should certainly be possible to prevent such abuse through good agreements between countries, for example an accord reached within the United Nations by its member states. Thales does not believe in outright bans, but in regulation and certification.
Worldwide UN agreement or agreement with many countries
Thales would like to see countries enter into dialogue with each other to arrive at a sound policy on AI. In his presentation, Caine compared regulating AI with banning landmines. You will never get every country on board, but it can be enough if 90 percent participate; he mentions the United States and Russia as countries that are probably not open to the idea. The same happened with the landmine ban: Russia and the United States never supported that treaty, but because the other countries pushed it through, the production and use of landmines are virtually non-existent today.
How AI should be certified, which standards everyone can agree on, and what that means for the technology sector will have to be decided in the future, and it will probably take some time before there is real clarity. In the meantime, Thales does not intend to wait: it has decided to draw up its own "Charter". In it, the company describes what it considers responsible and ethical use of digital assets. The Charter will therefore contain Thales's red lines on ethics and digital transformation, not only for military applications but also for use in business.
The charter is based on three subjects:
- Trust: trust in what the AI does, but especially in how it does it;
- Vigilance: what is the ethical impact of AI in our solutions?
- Governance: ethics comes first in everything we do.
Ultimately, the charter amounts to corporate responsibility around AI. That this matters so much at Thales can be traced back to its customer base: when governments and defense organizations form a large part of it, you have to take this seriously. Failing to do so leads to wrong choices with the wrong customers, and the company's entire image can come under pressure. With this charter, Thales wants to leave no doubt about where it stands.