“GPT Store poses threat to privacy and security”

This week, OpenAI launched the long-awaited GPT Store. It also announced ChatGPT Team, which is aimed at small-scale organizations. However, data security is in question, believes Alastair Paterson, CEO of Harmonic Security.

Custom GPTs can now be distributed through the GPT Store. Paterson cites Doc Maker as an example of such an AI model. This GPT can produce all kinds of presentations, spreadsheets and more based on submitted documents.

Tip: OpenAI launches GPT Store and new subscription for small businesses

Paterson discovered that this submitted data temporarily ends up on aidocmaker.com’s servers. Doc Maker (like all other custom GPTs) is accessible only to paying users. Only ChatGPT Plus is somewhat attractive to individual consumers; all other subscribers will be organizations: precisely the parties that would like to know where their data ends up.

Entrusting information to third parties

The free ChatGPT service has already attracted data leaks. For example, Samsung employees sent trade secrets to the chatbot in 2023. That data remains on OpenAI’s servers, which may already be undesirable for some companies, but at least it is clear that the data goes to one specific vendor.

In addition, paid versions of ChatGPT offer guarantees that conversations are not viewable by third parties.

With the introduction of ChatGPT Team and the GPT Store, OpenAI has created more risks than existed before, according to Paterson. The most commonly used GPTs will be the most visible, including many AI models that encourage users to submit data, he argues. This creates a so-called “shadow AI” problem: employees entrust chatbots with large amounts of information without their organizations being aware that this is happening.

Tiered security “very disappointing”

Paterson additionally has a big problem with the specific choices OpenAI made in tiering its security features. He cites as an example that SSO is only available to Enterprise users, calling this “very disappointing.”

Those who choose ChatGPT Team pay $25 per user per month on an annually billed subscription; billing monthly costs an additional $5 per user per month. OpenAI promises these subscribers that their data will not be used to further train AI models. General security measures that an admin can perform are reserved for the Enterprise plan.

The exact price of ChatGPT Enterprise varies by company, as prospective customers must contact OpenAI’s sales team. One Reddit user says they received a quote amounting to $108,000 annually, with a minimum of 150 users on the account. However, that figure could vary greatly depending on the organization.

ChatGPT Enterprise is far from the only AI option for businesses. A team of software developers can turn to GitHub Enterprise, for example, while specialized, secure AI solutions can also be built on a service like Google Vertex AI. For smaller organizations, especially those handling a lot of sensitive data, price constraints leave less choice. In this regard, the GPT Store seems to offer much uncertainty for those with more moderate budgets. Still, Paterson argues that AI will require a trade-off for security leaders: weighing data security and privacy on the one hand against an overly restrictive attitude toward AI on the other.

Also read: Microsoft’s billion-dollar investment in OpenAI on EU’s radar