
The service, launched on Tuesday, can identify AI-generated text with 98% accuracy, Turnitin claims. By comparison, OpenAI’s own AI text classifier correctly identifies AI-written text only 26% of the time.

According to Turnitin’s CEO, Chris Caren, the new tool was developed in response to educators’ requests for a way to detect AI-written text accurately. “They need to be able to detect AI with very high certainty to assess the authenticity of a student’s work and determine how to best engage with them,” Caren explained.

However, the launch of the new tool has been met with mixed reactions. Some institutions, including Cambridge and other members of the Russell Group, which represents leading UK universities, have announced their intention to opt out of the new service.

False positives

There are concerns that the tool may falsely accuse students of cheating, that it involves handing student data to a private company, and that it could discourage students from experimenting with new technologies such as generative AI.

As a result, UCISA, a UK membership body that supports technology in education, has worked with Turnitin to ensure universities can temporarily opt out of the feature. The American Association of Colleges and Universities has also expressed “dubiousness” about the detection system, given the rapid pace of AI development.

Deborah Green, CEO of UCISA, expressed concern that Turnitin was launching its AI detection system with little warning, just as students prepare coursework and sit exams this summer. While universities have broadly welcomed the new tool, Green added that they need time to assess it.

Impact on lecturers and universities

Lecturers worry they won’t know why essays have been flagged as AI-written. At a single university, an error rate of just 1% would mean hundreds of students wrongly accused of cheating, with little recourse to appeal, warned Charles Knight, assistant director at the consultancy Advance HE.
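Knight’s arithmetic is easy to check. The sketch below uses illustrative, assumed figures (the article gives no submission counts), showing how a seemingly small error rate scales to hundreds of wrongly flagged essays:

```python
# Hypothetical figures for a mid-sized university; only the 1% rate comes from the article.
submissions = 50_000        # assumed essay submissions checked per year (illustrative)
false_positive_rate = 0.01  # the 1% error rate Knight cites

wrongly_flagged = submissions * false_positive_rate
print(f"Essays wrongly flagged per year: {wrongly_flagged:.0f}")
```

With 50,000 assumed submissions, a 1% false-positive rate yields 500 wrongful flags a year, consistent with the “hundreds of students” Knight warns about.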

Turnitin did not immediately respond to a request for comment on the concerns raised about the AI detection tool, but the company has stated that the technology had been “in development for years” and that it provided resources to “help the education community navigate and manage [it]”.

The new tool’s launch has sparked a debate among academics, higher education consultants, and cognitive scientists worldwide over how universities might develop new modes of assessment in response to AI’s threat to academic integrity.
