
OpenAI has released a new safety policy to combat the rise of AI-generated child abuse


In response to growing concerns about children’s online safety, OpenAI has unveiled plans to strengthen US child protection efforts amid the AI boom. The Child Safety Policy, released on Tuesday, is designed to enable faster detection, better reporting, and more effective investigation of AI-related child abuse cases.

The overall goal of the child safety blueprint is to address the alarming rise in child abuse linked to advances in AI. According to the Internet Watch Foundation (IWF), more than 8,000 reports of AI-generated child abuse material were identified in the first half of 2025, a 14% increase over the same period the previous year. These cases include criminals using AI tools to create fake images of children for financial exploitation and to generate convincing grooming messages.

The OpenAI policy also comes amid increased scrutiny from policymakers, educators, and child protection advocates, particularly in light of cases in which teenagers died by suicide after prolonged engagement with AI chatbots.

Last November, the Social Media Victims Law Center and the Tech Justice Law Project filed seven lawsuits in California state courts, claiming that OpenAI released GPT-4o before it was ready. The suits allege that the model’s psychologically manipulative behavior contributed to wrongful deaths and assisted suicides. They cite four people who died by suicide and three others who experienced life-threatening delusions after prolonged interactions with the chatbot.

The plan was developed in collaboration with the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, with input from North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown.

The company says the plan focuses on three areas: revising laws to cover AI-generated abuse, improving reporting systems to ensure compliance, and building safeguards into AI systems. In doing so, OpenAI aims not only to detect potential threats early but also to ensure that reports reach investigators quickly.

OpenAI’s new child protection policy builds on earlier efforts, including revised guidelines for interactions with users under the age of 18, which prohibit inappropriate engagement and the promotion of self-harm, and bar guidance that could help minors hide harmful behavior from caregivers. The company also recently released a youth safety plan in India.

