OpenAI launches a tool to detect AI-generated text

Students are realizing that ChatGPT can produce essays as good as (or even better than) what they're capable of.

Students work on computers in the computer lounge at the campus of the University of New South Wales in Sydney, Australia, August 4, 2016. REUTERS/Jason Reed

The backstory: Since November, OpenAI's AI language model ChatGPT has drawn a lot of attention for everything it can do. Students are realizing that ChatGPT can produce essays as good as (or even better than) what they're capable of, so teachers are now scrambling for ways to tell whether an assignment came from a student or a robot.

More recently: ChatGPT's popularity has sparked some controversy in the academic and professional worlds. Some love the tech, while others worry about what it means for the future. One professor at the University of Pennsylvania's Wharton School has made it mandatory for students to use ChatGPT in his classes, while other institutions, like New York City's public schools, have banned it outright. Meanwhile, to address potential misuse, anti-cheating tools like GPTZero have started popping up.

The development: Now, OpenAI has launched its own software tool designed to detect AI-generated text. It can recognize text produced by OpenAI products as well as other AI writing software. It will be available as a web app and come with resources for teachers.

But OpenAI said that the tool, known as a classifier, has limitations and should be used alongside other verification methods. In the company's own testing, it correctly identified only 26% of AI-written text, and it wrongly flagged 9% of human-written text as AI-generated. So, it's not foolproof.
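To put those rates in perspective, here is a minimal back-of-the-envelope sketch of what they could mean in a classroom setting. The class size and the 20/80 split between AI-written and human-written essays are purely illustrative assumptions; only the 26% and 9% figures come from OpenAI's reported results.

    # Rough sketch of what OpenAI's reported classifier rates could imply.
    # Hypothetical scenario: 100 essays, 20 AI-written and 80 human-written.
    ai_essays = 20              # assumed number of AI-written submissions
    human_essays = 80           # assumed number of human-written submissions
    true_positive_rate = 0.26   # share of AI-written text the classifier catches (reported)
    false_positive_rate = 0.09  # share of human-written text wrongly flagged (reported)

    caught = ai_essays * true_positive_rate               # AI essays correctly flagged
    missed = ai_essays - caught                           # AI essays that slip through
    wrongly_flagged = human_essays * false_positive_rate  # human essays falsely accused

    print(f"AI essays flagged:        {caught:.0f} of {ai_essays}")
    print(f"AI essays missed:         {missed:.0f} of {ai_essays}")
    print(f"Human essays mis-flagged: {wrongly_flagged:.1f} of {human_essays}")

    # Of all flagged essays, only a fraction would actually be AI-written.
    precision = caught / (caught + wrongly_flagged)
    print(f"Share of flags that are correct: {precision:.0%}")

Under these assumed numbers, roughly 15 of the 20 AI-written essays would go undetected while about 7 human-written essays would be falsely flagged, which is why OpenAI stresses pairing the classifier with other verification methods.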

Key comments:

"While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human," said OpenAI in a blog post.

"The initial reaction was 'OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,'" said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. "I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power."

"Like many other technologies, it may be that one district decides that it's inappropriate for use in their classrooms," said OpenAI policy researcher Lama Ahmad. "We don't really push them one way or another. We just want to give them the information that they need to be able to make the right decisions for them."