OpenAI to permit verified adults to generate erotic content on ChatGPT

OpenAI has announced plans to ease restrictions on its ChatGPT chatbot, including allowing verified adult users to generate erotic content.

The change comes under what the company calls its “treat adult users like adults” policy.

The update will be part of a new version of ChatGPT that lets users personalise the AI assistant’s personality. Users will be able to choose from features such as more human-like responses, heavy emoji use, or friend-like behaviour.

The most notable change is expected in December, when OpenAI will introduce more comprehensive age verification systems so that verified adults can generate erotic content. The company has not yet disclosed how the verification process will work or what additional safeguards will apply to such content.

In September, OpenAI launched a separate ChatGPT experience for users under 18, automatically filtering out graphic or sexual material. The firm also revealed that it is developing behaviour-based age prediction technology that can estimate whether a user is above or below 18 based on their interactions with the chatbot.

OpenAI CEO Sam Altman addressed the decision in a post on X, stating that strict guardrails to protect users from mental health risks had made ChatGPT “less useful/enjoyable to many users who had no mental health problems.”

The company’s tighter safety measures were introduced earlier this year after the death of Adam Raine, a teenager from California. His parents filed a lawsuit in August, alleging that ChatGPT provided him with explicit advice on how to take his own life. Following the incident, Altman said OpenAI had “been able to mitigate the serious mental health issues.”

The U.S. Federal Trade Commission has also opened an investigation into how AI chatbots, including ChatGPT, might harm children and teenagers.

“Given the seriousness of the issue we wanted to get this right,” Altman said on Tuesday. He added that OpenAI’s updated safety tools now make it possible to relax content restrictions while continuing to address major mental health risks.
