OpenAI has shuttered its AI classifier, a tool that was supposed to tell human writing from AI, due to its low accuracy rate. In an updated blog post, OpenAI said it decided to end the classifier as of July 20th. “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company said.
After OpenAI’s ChatGPT burst onto the scene and became one of the fastest-growing apps ever, people scrambled to grasp the technology. Several sectors raised alarms about AI-generated text and art, particularly educators, who worried students would stop studying and simply let ChatGPT write their homework. New York City schools even banned access to ChatGPT on school networks and devices amid concerns about accuracy, safety, and cheating.
OpenAI also recently lost its trust and safety leader, at a time when the Federal Trade Commission is investigating the company over how it vets information and data. OpenAI declined to comment beyond its blog post.
For now, it seems people will have to distinguish AI-generated text from human writing on their own, relying only on their experience.
Source: https://entc.com.ua/en/1817-openai-itself-can-t-determine-what-was-written-by-artificial-intelligence