ChatGPT fixed issue that exposed users’ conversations

A bug in the AI chatbot ChatGPT allowed some users to see the titles of other users’ conversations. The CEO of OpenAI, the company behind the tool, says the bug has now been fixed, but users remain concerned about privacy on the platform.

Since launching in November last year, millions of people have used ChatGPT to draft messages, write songs, and even code. Each conversation with the chatbot is stored in the user’s chat history bar, where it can be revisited later.

However, users began to see conversations in their history that they didn’t have with the chatbot.

Users should be concerned

One user on Reddit shared a photo of their chat history, which included titles like “Chinese Socialism Development,” as well as conversations in Mandarin. OpenAI’s chief executive, Sam Altman, tweeted that the company feels “awful,” but the “significant” error has been rectified.

He also said that a “technical postmortem” would follow soon. Nevertheless, the error has drawn concern from users who fear their private information could be exposed through the tool, and the glitch appeared to confirm that OpenAI has access to users’ chats.

The company’s privacy policy states that user data, such as prompts and responses, may be used to continue training the model, but only after personally identifiable information has been removed.

Google is still working on Bard

The mistake came just a day after Google opened its chatbot Bard to beta testers and journalists. Microsoft, a major investor in OpenAI, is competing with Google to bring generative AI into its products.

The rapid pace of new product updates and releases has raised concern that missteps like this could cause harm or have unintended consequences. The incident underscores the need to develop AI products with user privacy and security in mind.