OpenAI’s new Dublin office does not resolve any privacy concerns

OpenAI recently opened an office in Dublin, through which it intends to control European data via its new Irish subsidiary, OpenAI Ireland Limited. It’s a move that has pleased EU regulators before, but it doesn’t resolve any privacy concerns.

With OpenAI Ireland Limited, there is now a European entity acting as the ‘data controller’ on the continent. It is the same tactic employed by Apple, Google, Meta, TikTok and X, as TechCrunch notes. The Dublin office represents little more than the headquarters of a legal entity inside the EU: only five job openings are listed online for the location, so the company’s previously stated EU plans sound more ambitious than they actually are.

For end users, this means that new terms of service put OpenAI Ireland Limited in charge as the data controller for European accounts. The new privacy policy takes effect on February 15, 2024. Anyone who disagrees with the revised terms may have their account deleted.

Privacy concerns have barely diminished

A European office isn’t a silver bullet that pleases all regulators, as other companies have found out. Meta received a $1.3 billion fine last year for breaching privacy rules, but it had previously avoided a potential 4 billion euro fine, which the Irish regulator, the Data Protection Commission (DPC), reduced to 390 million euros. Since many tech companies are based in Ireland, complaints from Brussels and EU member states tend to be handed over to the DPC for actual enforcement, and the organization has more than once shown itself to be lethargic and overly lenient. For OpenAI, this means it has less to fear from individual actions by EU countries, such as Italy’s temporary ban on ChatGPT.

OpenAI can still collect data from customers, just as it did before. The company’s EU privacy policy, drafted on Dec. 15, 2023, continues to define the use and disclosure of data as broadly as it legally can. For example, OpenAI states that it may send data to recipients outside the EU for numerous purposes, including fraud prevention, marketing and the optimization of its services. That is a fairly obvious necessity, since the handful of Dublin employees could never handle all that EU data alone. In practice, OpenAI remains an American company in its operations, even if it now legally operates as a European entity as well.

Does ChatGPT Enterprise offer hope to customers?

For ChatGPT Enterprise customers, different rules apply. OpenAI guarantees, for example, that it will never use those users’ enterprise data for training purposes. This is because the customer itself, not OpenAI, is the data controller for the Enterprise product; OpenAI is merely the data processor, so securing the data is up to the organizations that purchase the service.

In addition, ChatGPT Enterprise meets SOC 2 compliance, which seems to guarantee a certain level of privacy and security. However, this sounds more impressive than it is: the SOC 2 criteria are “open to interpretation,” as security firm Check Point puts it. They are guidelines, not hard requirements.

The battleground is changing for OpenAI

OpenAI announced plans for a Dublin office in September. Since then, the AI Act has crystallized further; among other things, it sets the ground rules around training data for AI models such as GPT-4. Yet there is more cause for concern for OpenAI outside Europe: after the company struck deals for access to the archives of the Associated Press and Axel Springer, The New York Times chose instead to sue for alleged copyright infringement.

That case is currently a bigger concern for OpenAI than any regulator. Regulators have more than once proved open to negotiation after first kicking up a fuss about the conduct of many a tech company. Microsoft, for example, appears to have found the UK CMA’s inflexible attitude “tough but fair” surrounding its $68 billion acquisition of Activision Blizzard. Similarly, when it comes to privacy, regulators prove persuadable after tough negotiations. X, for example, has retained its protected GDPR status despite having lost so many employees at its Dublin office that its European operations have become largely ineffective; the European Commission is even investigating the platform for allowing the spread of disinformation. OpenAI is seeking the same approval, and once granted, it is unlikely to lose it. This shows how easy the privacy rules are to circumvent: with only vague compliance promises and minimal manpower in Europe, OpenAI doesn’t address any actual privacy concerns.