OpenAI Grapples With Unreleased AI Detection Tool Amid Cheating Concerns
OpenAI has developed a sophisticated anti-cheating tool for detecting AI-generated content, particularly essays and research papers, but has refrained from releasing it due to internal debates and ethical considerations, according to The Wall Street Journal. The tool, reportedly ready for deployment for about a year, uses a watermarking technique that subtly alters token selection in ChatGPT's output, creating a pattern imperceptible to readers but detectable by OpenAI's technology. While the method is said to be 99.9% effective on sufficiently long AI-generated text, concerns persist about potential workarounds, about who should be granted access to the detector, and about its possible impact on non-native English speakers and the broader AI ecosystem.
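OpenAI has not published how its watermark works, but the general idea of biasing token selection can be illustrated with a toy sketch of a "green-list" scheme (in the style of published statistical-watermarking research, not OpenAI's actual method): the previous token seeds a deterministic split of the vocabulary, generation favors the "green" half, and a detector measures how far the green-token fraction deviates from the ~50% expected in unmarked text. All names and parameters below are illustrative assumptions.

```python
import hashlib
import random

def green_set(prev_token, vocab, fraction=0.5):
    # Deterministically partition the vocabulary using a hash of the
    # previous token as the seed, so the detector can recompute the split.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate_watermarked(vocab, length=300, bias=0.9, seed=0):
    # Toy stand-in for a language model: at each step, sample from the
    # green set with probability `bias`, otherwise from the rest.
    rng = random.Random(seed)
    tokens = [rng.choice(sorted(vocab))]
    for _ in range(length - 1):
        green = green_set(tokens[-1], vocab)
        pool = sorted(green) if rng.random() < bias else sorted(vocab - green)
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens, vocab):
    # Detector: fraction of tokens that fall in the green set implied
    # by their predecessor; ~0.5 for unmarked text, near `bias` if marked.
    hits = sum(1 for a, b in zip(tokens, tokens[1:]) if b in green_set(a, vocab))
    return hits / (len(tokens) - 1)
```

A longer sample makes the statistical signal sharper, which is consistent with the reported caveat that the detector works best on substantial amounts of text; short snippets leave the green-token count within normal random variation.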