OpenAI has announced new teen accounts and parental controls for ChatGPT, part of its broader effort to make AI safer for younger people. Teens aged 13 to 17 will be able to use ChatGPT through accounts linked to a parent or guardian, with built-in guardrails designed to filter sensitive content, set time restrictions, and alert parents if the system detects signs of acute distress.
On paper, these features make sense: teens are already experimenting with AI for homework, creative writing, and personal questions. But as with most parental controls, the tools are more of a guardrail than a failsafe. Parents will be able to set blackout hours, disable features like memory, and get notifications in certain situations.
What parents won’t get is peace of mind that these measures alone will keep their kids safe. As a parent of a high schooler (and two older children), I’ve already had conversations with my son about ChatGPT – just as I did about social media and texting. I see these controls as useful, but not a substitute for ongoing oversight and open communication. Realistically, if teens want unrestricted access to ChatGPT, they’ll find workarounds – just like they did with Finsta accounts on Instagram.
How to Set Up a Teen Account
OpenAI says parents will be able to create a teen account by sending an email invitation from their own ChatGPT account. Once linked, the teen’s account automatically follows age-specific policies and limitations. Parents can then:
- Manage features – Decide whether memory and chat history are enabled.
- Set blackout hours – Block access during times like late nights or school hours.
- Adjust response rules – Guide how ChatGPT answers their teen, following model behaviors tailored for under-18s.
- Get alerts in emergencies – If the system detects signs of acute distress, parents will receive a notification. In rare cases where parents can’t be reached, OpenAI may involve emergency services.
This setup creates a more structured experience for teens, though the real work – ongoing conversations about responsible tech use – still falls to parents.
The Bigger Question: How Will OpenAI Decide Who’s a Teen?
To enforce these new protections, OpenAI is building an “age prediction” system that estimates whether a person is under 18 based on how they use ChatGPT. If there’s doubt, the system will default to treating the person as a minor unless they provide proof of age. That sounds simple in theory, but it raises thorny privacy issues. Will OpenAI be scanning everyone’s conversations for signs they might be a teen? What happens if the system misclassifies someone, either by failing to flag a struggling teen or by wrongly escalating a conversation to emergency services?
OpenAI is candid that its principles are in tension: teen safety comes first, even at the cost of privacy and freedom. Adults will still be able to opt into more open interactions with ChatGPT, including creative writing scenarios that minors won’t have access to. But when corporations “play it safe,” it’s often our personal freedom that takes the hit. That’s the trade-off at the heart of this announcement: are we comfortable with AI companies making judgment calls about how private conversations should be monitored in the name of protecting teens?
The new parental controls will roll out by the end of the month, with the age-prediction system following over time. For families, this is an opportunity to add an extra layer of protection – but not an excuse to hand off responsibility to an algorithm.
[Image credit: AI-generated via DALL·E]