
Four individuals familiar with conversations between Musk, Twitter executives, and regulators in Brussels were cited in the report. The demand complicates Musk’s efforts to restructure Twitter, which he acquired for $44 billion in October 2022.

Since then, he has laid off more than half of Twitter’s 7,500 employees, including trust and safety teams in some offices, while seeking cost-effective ways to monitor tweets.

Currently, Twitter uses a combination of AI and human moderation to detect and review harmful material but, unlike its larger competitor Meta, does not use fact-checkers. Twitter also relies on volunteer moderators for its Community Notes feature to address misinformation on its platform.

Community Notes are not enough

In a video call last January, Musk told European Union industry chief Thierry Breton that he would rely more heavily on Twitter’s AI systems, according to individuals with direct knowledge of the conversations. Breton warned Musk of the “huge work ahead” for Twitter to apply transparent use policies, significantly reinforce content moderation, and safeguard freedom of speech.

Further talks between Twitter and EU regulators about moderation plans have taken place recently. Officials acknowledged that the Community Notes model could weed out a good deal of misleading information, much as Wikipedia’s volunteer editors do.

Nevertheless, there are concerns that Twitter lacks anything like the hundreds of thousands of volunteer editors Wikipedia can draw on, and that it has a poor record on moderating non-English content, a problem shared by other social networks.

“Platforms should be under no illusion that cutting costs risks cutting corners in an area that has taken years to develop and refine,” said Adam Hadley, director of Tech Against Terrorism, a UN-backed organization that assists platforms in monitoring extremist content online.

He added, “We are worried about the signal Twitter’s latest move sends to the rest of the industry.”

The European Commission stated that “ensuring sufficient staff is necessary for a platform to respond effectively to the challenges of content moderation, which are particularly complex in the field of hate speech. We expect platforms to ensure the appropriate resources to deliver on their commitments.”