OpenAI has taken a new step toward improving online safety for young people. The company has published the Teen Safety Policy Pack on GitHub, a set of policy guidelines and sample prompts that developers can use to better tailor AI systems to a younger audience.
The initiative addresses the growing role of generative AI in teenagers’ daily lives and concerns about its impact. The safety guidelines are intended to give developers a concrete starting point so they don’t have to reinvent the wheel when making their applications safe for minors, TechCrunch reports. The policies are structured as prompts and can be applied with OpenAI’s open-source safety model gpt-oss-safeguard, as well as with other models.
The policy pack is a practical toolkit. Instead of abstract guidelines, OpenAI offers concrete instructions that can be directly integrated into AI applications. These help systems recognize risky or inappropriate content and respond consistently, for example, in cases of violence, sexual content, harmful beauty ideals, and dangerous challenges.
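To illustrate the idea of policy-as-prompt, the sketch below pairs a safety policy with a piece of user content in chat-message form, ready to send to a classifier such as gpt-oss-safeguard or another model behind a chat-style API. The policy wording, labels, and helper function here are illustrative placeholders, not the actual contents of the Teen Safety Policy Pack.

```python
# Illustrative sketch: a teen-safety policy expressed as a prompt.
# The wording and labels are placeholders, not OpenAI's actual policy text.
EXAMPLE_POLICY = """\
Classify the user content against this policy.
Return exactly one label:
- ALLOW: content is appropriate for a teenage audience
- BLOCK: content involves violence, sexual content, harmful beauty
  ideals, or dangerous challenges
"""


def build_classification_messages(policy: str, content: str) -> list[dict]:
    """Pair a policy prompt with the content to classify.

    The policy goes in the system message and the content to evaluate
    in the user message, matching the common chat-completions layout.
    """
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": content},
    ]


messages = build_classification_messages(
    EXAMPLE_POLICY,
    "Try this extreme 48-hour fasting challenge!",
)
# `messages` would then be sent to gpt-oss-safeguard (or any other model)
# via a chat-completions endpoint, and the returned label used to decide
# whether the application shows, filters, or reframes the content.
```

Because the policy lives in the prompt rather than in model weights, developers can adapt the rules to their own application without retraining anything, which is what makes the pack directly reusable.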
Notably, the package is available as open source. Developers gain insight into OpenAI’s approach and can adapt the rules to their own applications. According to the company, this helps establish a baseline level of safety within the AI ecosystem, with room for further improvement.
The development was carried out in collaboration with organizations such as Common Sense Media and everyone.ai, which focus on digital safety. Based on this collaboration, the policy pack is positioned as a minimum safety standard that the industry can expand upon.
Ready-made prompts as a solution for developers
OpenAI states that developers often struggle to translate general safety goals into concrete rules. This leads to inconsistent policies or overly broad filtering. With ready-made prompts, the company aims to lower that barrier and accelerate implementation.
The Teen Safety Policy Pack fits into a broader strategy to make AI use by young people safer, with previous steps such as parental controls, age estimation, and updates to the Model Spec for users under eighteen.
At the same time, OpenAI acknowledges that this approach does not offer a complete solution. The company is under pressure from societal and legal scrutiny of AI’s impact, including lawsuits over suicides in which chatbot use allegedly played a role. In that light, the policy pack is a next step toward mitigating risks and better supporting developers.
With this publication, OpenAI positions itself as a party committed to standardizing safety practices. Whether the initiative gains widespread adoption depends on developers’ willingness to integrate these guidelines.