New Delhi | Updated 05-08-2024
- OpenAI’s tool can reportedly detect ChatGPT-generated text with 99.9% accuracy.
- The tool has been technically ready for about a year but remains unreleased, partly over fears of losing users.
- Concerns include complexity, risk factors, and the potential misuse of the tool if widely available.
OpenAI’s ChatGPT can write, rewrite, and paraphrase any text it is prompted with, saving time for many but raising concerns about its use for cheating, particularly in educational settings. Since its inception, ChatGPT has sparked widespread debate about its potential for misuse. OpenAI, however, has developed a method to detect when its AI has been used to write a text. According to the Wall Street Journal, this detection tool has been technically ready for about a year, but OpenAI has not yet released it.
The delay in releasing the tool is attributed to concerns about attracting and retaining users. A survey conducted by the company found that nearly a third of loyal ChatGPT users would be turned off by the anti-cheating technology. Meanwhile, a survey by the Center for Democracy and Technology found that 59% of middle- and high-school teachers believed some students had used AI to assist with schoolwork, up 17 points from the previous school year.
An OpenAI spokesperson stated that the decision to withhold the tool is due to its complexity and potential risks. The launch could impact the broader ecosystem beyond OpenAI, given the complexities involved.
OpenAI’s anti-cheating tool modifies how ChatGPT selects words or word fragments (tokens) to generate text. This modification introduces a subtle pattern, known as a watermark, into the generated text, allowing for detection of potential cheating or misuse. The watermarks, though undetectable to humans, can be recognized by OpenAI’s detection technology, which assigns a score indicating the likelihood that a document or section was generated by ChatGPT.
Internal documents reveal that the watermarking technique is nearly flawless, achieving a 99.9% effectiveness rate when ChatGPT produces a substantial amount of new text. However, concerns remain that watermarks can be erased through techniques like translating the text into another language and back or adding and then removing emojis.
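OpenAI has not published the details of its watermarking scheme, but token-level watermarks of this kind are well understood in the research literature: the generator subtly biases its token choices toward a pseudorandom "green" subset of the vocabulary derived from the preceding context, and a detector then checks how often tokens land in their predecessor's green set. The sketch below illustrates that general idea with a toy vocabulary; all function names are illustrative, and a real system would bias a language model's sampling distribution rather than pick tokens outright.

```python
import hashlib
import random


def green_list(prev_token, vocab, fraction=0.5):
    """Deterministically partition the vocabulary using the previous token as seed."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])


def generate_watermarked(start, vocab, length, fraction=0.5, seed=0):
    """Toy generator: always draw the next token from the current green list.

    A real model would instead nudge its probability distribution toward
    green tokens, preserving fluency while leaving a statistical trace.
    """
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        greens = sorted(green_list(tokens[-1], vocab, fraction))
        tokens.append(rng.choice(greens))
    return tokens


def detection_score(tokens, vocab, fraction=0.5):
    """Fraction of tokens falling in their predecessor's green list.

    Ordinary text scores around `fraction`; watermarked text scores near 1.0.
    """
    hits = sum(tok in green_list(prev, vocab, fraction)
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

This also makes the reported weaknesses concrete: translating the text to another language and back replaces the tokens entirely, so the green-list pattern is destroyed and the score falls back toward chance.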
The primary issue is determining who should have access to the tool if it is released. If too few people have it, the tool wouldn’t be useful. If too many get access, bad actors might decipher the company’s watermarking technique.
OpenAI has prioritized watermarking technologies for audio and visual content, where the consequences of AI-generated media, such as deepfakes, are potentially more severe. The text detection tool, meanwhile, remains unreleased, and the debate over AI usage and detection continues.