OpenAI is being sued over allegedly defamatory “hallucinations” generated by ChatGPT. The generative AI chatbot keeps making things up – and is now getting its owner into legal trouble.

A Georgia radio host, Mark Walters, found that ChatGPT was spreading false information about him. He has therefore sued OpenAI in what is reportedly the company’s first defamation lawsuit, according to Bloomberg Law.

Walters had given an impassioned commentary on a gun rights non-profit organisation called the Second Amendment Foundation (SAF). But when journalist Fred Riehl asked ChatGPT to summarize a complaint that SAF had filed in federal court, the chatbot generated a completely inaccurate response. It falsely claimed that the case had been filed against Walters for embezzlement, supposedly committed while he served as SAF’s chief financial officer and treasurer – a post he never held.

When Riehl then asked ChatGPT to point to the specific paragraphs mentioning Walters in the SAF complaint, or to provide the complaint’s full text, ChatGPT generated a “complete fabrication” that “bears no resemblance to the actual complaint, including an erroneous case number,” according to Walters’ lawsuit.

Is OpenAI liable?

Walters’ complaint highlights the fact that OpenAI “is aware that ChatGPT sometimes makes up facts”. Indeed, as the complaint also notes, OpenAI even has a special term for such falsehoods, referring to them as “hallucinations”. So this case could be the one to test companies’ legal liability for their generative AI services.

Eugene Volokh, a UCLA law professor who has studied the legal liability of AI systems, told Ars Technica that arguing that “ChatGPT often does publish false statements generally” could bolster Walters’ case. However, he added that such an argument may not be sufficient, because “you can’t show that a newspaper had knowledge or recklessness as to falsehood just because the newspaper knows that some of its writers sometimes make mistakes”.

Volokh also noted that OpenAI might not be shielded by the disclaimers in ChatGPT’s terms of use.

A publisher “can’t immunize itself from defamation liability by merely saying, on every post, ‘this post may contain inaccurate information’—likewise with OpenAI,” Volokh said.