
France’s digital minister says the chatbot violates the GDPR, but still opposes a ban like the one Italy has imposed.

France’s Digital Minister Jean-Noël Barrot said he thinks OpenAI’s ChatGPT doesn’t respect the EU’s privacy laws, but nonetheless argued against banning it, according to a report in POLITICO.

Speaking with French news outlet La Tribune, Barrot framed his opposition as a rejection of both extremes in the debate. “First, we saw a wave of ‘technophilia’, where they wanted us to believe that ChatGPT was going to solve all the problems of the world, then a wave of technophobia where we had to impose a moratorium on ChatGPT or even ban it. Neither position is correct”, he declared.

“Technology is neither good nor bad in itself; it is always at the service of Man”, he added.

France has its own view of AI

Asked whether France should follow Italy and impose a ban of its own, Barrot replied with a resolute “non”. “France’s strategy is simple”, Barrot stated. “The first thing is to be able to master this technology…Once you have mastered the technology rather than being subjected to it, the second step is to frame the innovation so that it complies with the principles [of society]”.

“Obviously AI can render immense services to humanity. But like any technological tool, it presents a number of risks that must be controlled”, he added.

OpenAI responds to Italian ban

Last week Italy’s privacy regulator issued a “temporary” ban on ChatGPT, asserting that OpenAI was not in compliance with the EU’s General Data Protection Regulation (GDPR). As in Italy, any decision to bar OpenAI from offering ChatGPT in France would not rest with Barrot’s office but with France’s own data protection authority, the CNIL. Indeed, the French regulator has already received at least two complaints alleging privacy violations, according to L’Informé.

Meanwhile, OpenAI appears to be responding to Barrot’s call to master the chatbot technology. The US tech company this week told the Italian regulator it would be “more transparent about how it uses personal data for its AI chatbot and improve mechanisms for people to request a correction or deletion of their data”, according to POLITICO.