In this AI Ecosystem Report, Kieron Allen explores OpenAI’s watermarking system for ChatGPT, its potential impact on education, concerns about AI detection, and the challenges of balancing operational integrity with user trust in GenAI technologies.
Highlights
00:13 — One of the initial fears about ChatGPT came from the academic world. Educators were concerned that students could use the technology to generate essays and other documents. Now, the Wall Street Journal has reported that OpenAI has developed a system for watermarking ChatGPT-generated text, along with a tool to detect these watermarks, but has not yet released either.
00:42 — In an independent survey, the Journal found that people worldwide supported the idea of an AI detection tool by a margin of four to one. In response, OpenAI, in an updated blog post, said the following: “Our teams have developed a text watermarking method that we continue to consider as we research alternatives . . .”
01:32 — The tool could also stigmatize the use of AI as a legitimate writing aid for non-native English speakers. These are valid concerns, but another survey of ChatGPT users found that 30% said they would stop using ChatGPT if the watermarking feature were added. So, there are operational and financial concerns, too.