
Researchers find way to identify academic AI writing 99% of the time

AI-generated academic articles can be differentiated from human writing using a simple set of standardised processes, a study shows.

Can generative AI services like ChatGPT successfully impersonate actual academic writers? A group of US-based researchers say no. In fact, there are many telltale signs that can help distinguish AI chatbots from humans. This was emphasised by a study published this week in the journal Cell Reports Physical Science.

Based on these signs, the researchers developed a tool to identify AI-generated academic science writing with over 99% accuracy.

Helping non-techies to spot AI writing

Heather Desaire, a professor at the University of Kansas and lead author of the study, explained the purpose of the project: “We tried hard to create an accessible method so that with little guidance, even high school students could build an AI detector for different types of writing.” “There is a need to address AI writing, and people don’t need a computer science degree to contribute to this field,” she added in an article published in TechExplore.

Addressing an AI-driven “cultural shift”

“ChatGPT has enabled access to artificial intelligence (AI)-generated writing for the masses, initiating a culture shift in the way people work, learn, and write,” the study asserts. “The need to discriminate human writing from AI is now both critical and urgent.”

To address this need, the researchers developed a method for discriminating text generated by ChatGPT from text written by (human) academic scientists. The method relies on standardised “classification methods”.

The approach uses relatively straightforward features to discriminate humans from AI. For example, the study notes, scientists tend to write longer paragraphs. Real humans also use more “equivocal” language: they frequently qualify their points with words like “but,” “however,” and “although.” AI, by contrast, does not second-guess itself in this way.
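As a rough illustration of what such stylometric features could look like in code, here is a short Python sketch. The study’s exact feature definitions are not given in this article, so the paragraph-length and hedging-word measures below are illustrative assumptions, not the authors’ actual feature set.

```python
# Illustrative sketch only: these two features (average paragraph length
# and the rate of hedging words) are assumptions inspired by the article,
# not the study's published feature definitions.

HEDGE_WORDS = {"but", "however", "although"}

def extract_features(text: str) -> list[float]:
    """Compute a few simple stylistic features from a document."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    words = text.lower().split()

    avg_paragraph_len = (
        sum(len(p.split()) for p in paragraphs) / len(paragraphs)
        if paragraphs else 0.0
    )
    hedge_rate = (
        sum(w.strip(".,;:") in HEDGE_WORDS for w in words) / len(words)
        if words else 0.0
    )
    return [avg_paragraph_len, hedge_rate]
```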

With a set of 20 such features, the researchers built a model that classifies the author of a text as human or AI, reportedly with over 99% accuracy. “This strategy could be further adapted and developed by others with basic skills in supervised classification,” they claim, “enabling access to many highly accurate and targeted models for detecting AI usage in academic writing and beyond.”
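A minimal sketch of that supervised-classification step is shown below, building on the feature extractor above. The labelled training corpus, the logistic-regression model, and the label encoding are all assumptions for illustration; the study’s actual model and its 20 features are not detailed here.

```python
# Hypothetical sketch of the supervised-classification step: train an
# off-the-shelf classifier on labelled human/AI documents represented by
# feature vectors from extract_features() (defined in the earlier sketch).
from sklearn.linear_model import LogisticRegression

def train_detector(documents: list[str], labels: list[int]) -> LogisticRegression:
    """labels: 1 = human-written, 0 = AI-generated (assumed encoding)."""
    X = [extract_features(doc) for doc in documents]
    model = LogisticRegression()
    model.fit(X, labels)
    return model

# Usage, given a labelled corpus you supply:
# model = train_detector(train_texts, train_labels)
# prediction = model.predict([extract_features(new_text)])
```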