OpenAI has introduced stricter safety rules for how ChatGPT interacts with users under 18, alongside new AI literacy guides for teens and parents, as governments around the world debate tougher standards for protecting minors online.
The updated rules are contained in OpenAI’s revised Model Spec, which outlines how its AI models should behave. The company says the changes are designed to reduce risks to young users at a time when regulators, educators, and child-safety advocates are closely examining the effects of AI chatbots on teenagers.
The move comes amid rising political pressure in the United States. Dozens of state attorneys general recently urged major tech companies to strengthen protections for children using AI tools, while some lawmakers have proposed legislation that would severely limit or even ban minors’ access to AI chatbots. At the federal level, officials are still weighing what a nationwide AI regulatory framework should look like.
Under the new rules, ChatGPT must follow stricter limits when it detects a teenage user. The model is instructed to avoid immersive romantic roleplay and first-person intimate, sexual, or violent roleplay, even when prompts are framed as fictional or educational. It must also exercise extra caution around sensitive topics such as body image, eating behaviors, and personal safety.
OpenAI says these safeguards will eventually be supported by an age-prediction system that can identify accounts likely belonging to minors and automatically apply teen-specific protections. The guidelines also emphasize that, when safety and user freedom conflict, the model should prioritize safety and encourage teens to seek real-world support from family members, trusted adults, or professionals.
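In outline, such a gate is simple: predict whether an account belongs to a minor, then route requests through teen-specific rules. The sketch below is a minimal illustration of that logic under stated assumptions; the category labels, the `gate_request` function, the probability threshold, and the age-prediction score are all hypothetical stand-ins, not OpenAI's actual implementation.

```python
from dataclasses import dataclass

# Content categories the Model Spec treats as off-limits or extra-cautious
# for teen accounts (labels here are illustrative, not OpenAI's taxonomy).
TEEN_BLOCKED = {"romantic_roleplay", "first_person_intimacy",
                "sexual_roleplay", "violent_roleplay"}
TEEN_SENSITIVE = {"body_image", "eating_behaviors", "personal_safety"}

@dataclass
class Decision:
    allow: bool
    mode: str  # "default", "refuse", or "cautious"

def gate_request(minor_probability: float, topic: str) -> Decision:
    """Apply teen-specific protections when an account is likely a minor.

    minor_probability: output of a hypothetical age-prediction model,
    i.e. the probability (0-1) that the user is under 18.
    """
    likely_minor = minor_probability >= 0.5  # threshold is an assumption
    if likely_minor and topic in TEEN_BLOCKED:
        return Decision(allow=False, mode="refuse")
    if likely_minor and topic in TEEN_SENSITIVE:
        # Safety over user freedom: answer carefully and steer the teen
        # toward family, trusted adults, or professionals.
        return Decision(allow=True, mode="cautious")
    return Decision(allow=True, mode="default")
```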
According to the company, its approach to teen safety is built around four principles: putting safety first, promoting real-world support, communicating with teens respectfully without treating them like adults, and being transparent by reminding users that ChatGPT is an AI, not a human.
The updated Model Spec includes examples of how ChatGPT should explain refusals, such as declining to act as a romantic partner or to assist with risky appearance changes. Legal and child-safety experts have welcomed the clearer boundaries, saying they could help reduce harmful or overly dependent interactions between teens and chatbots.
However, critics caution that written policies do not always translate into real-world behavior. Past versions of OpenAI’s guidelines banned certain problematic behaviors, yet researchers say the chatbot sometimes failed to follow them consistently. Child-safety organizations have also pointed to tensions within the rules themselves, particularly between safety-focused limits and broader principles encouraging engagement on any topic.
OpenAI says it has improved enforcement by using automated systems that assess text, images, and audio in real time to detect unsafe content, including material related to self-harm or exploitation. In serious cases, flagged interactions may be reviewed by trained staff, and parents could be notified.
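To give a rough sense of that enforcement flow, the sketch below shows one way an automated pipeline could score incoming content and escalate serious cases to human reviewers. The placeholder classifier, severity values, and escalation threshold are assumptions made for illustration; OpenAI has not published the internals of its systems.

```python
from typing import NamedTuple

class Flag(NamedTuple):
    category: str    # e.g. "self_harm", "exploitation"
    severity: float  # 0.0 (benign) to 1.0 (severe)

ESCALATION_THRESHOLD = 0.8  # assumed cutoff for routing to trained staff

def score_content(text: str) -> list[Flag]:
    """Placeholder classifier. A production system would instead run
    trained models over text, images, and audio in real time."""
    flags = []
    if "hurt myself" in text.lower():
        flags.append(Flag("self_harm", 0.9))
    return flags

def moderate(message: str) -> str:
    flags = score_content(message)
    worst = max((f.severity for f in flags), default=0.0)
    if worst >= ESCALATION_THRESHOLD:
        # Serious cases go to trained reviewers; parents may be notified.
        return "escalate_to_human_review"
    if flags:
        return "respond_with_caution"
    return "respond_normally"
```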
The company has also released two new resources aimed at families, offering conversation starters and practical advice to help parents guide teens in using AI responsibly, setting boundaries, and thinking critically about chatbot responses.
Experts say the changes may help OpenAI get ahead of upcoming laws, such as new state regulations that will require AI platforms to disclose child-safety measures and remind minors that they are interacting with a chatbot. Still, analysts stress that the real test will be whether ChatGPT consistently follows these rules in everyday use.
As debates over AI regulation continue, OpenAI’s latest update highlights a broader shift in the industry: growing recognition that protecting young users will be central to the future legal and ethical framework governing artificial intelligence.