New safety feature launched amid growing concerns over AI risks
OpenAI launches parental control feature for teen users
OpenAI, the developer of ChatGPT, has introduced a new parental control feature designed to enhance the safety of teenage users. The feature allows parents to directly manage how their children use ChatGPT and provides real-time alerts when potential risks are detected.
This move comes shortly after a lawsuit in the U.S. alleged that ChatGPT played a role in a teenager’s decision to take his own life, drawing significant attention from the industry.
Custom restrictions and content filters set by parents
Parents can enable the feature by sending a request to their child via email. Once activated, they can adjust detailed settings such as usage time limits, permissions for voice mode and image generation, and blocking sensitive topics.
A restricted version of ChatGPT is also available, designed to minimize exposure to harmful content by reducing conversations about sensitive issues such as dieting, sex, and hate speech.
Crisis detection with emergency alerts
The most notable aspect of the new system is its crisis detection and instant alert function. If ChatGPT detects that a teenage user may be experiencing severe emotional distress, it immediately sends an emergency notification to parents via email, text message, or app push notification.
OpenAI stressed that it will not share conversation details directly, ensuring that teens’ privacy and autonomy are respected.
Tragedy and legal battle fuel AI safety debate
The rollout of this feature follows a tragic incident. Adam Raine, a 16-year-old who had been using ChatGPT since last year, reportedly began asking the chatbot for detailed instructions on suicide earlier this year after experiencing suicidal thoughts.
He died in April, and his parents subsequently filed a lawsuit against OpenAI and CEO Sam Altman. The case has sparked global debate about the risks of AI services for young people and the broader question of AI safety.
Implications for AI regulation and big tech responses
OpenAI also announced that it is developing software capable of estimating users’ ages, as part of broader measures to strengthen protections for minors. Analysts suggest that similar safeguards are likely to be adopted across the AI industry in the near future.
Experts believe this development could mark a turning point in how big tech companies approach AI safety and social responsibility. Competitors worldwide are also expected to roll out comparable protective features.
By Ju-Baek Shin, The Korea Metaverse Journal
jbshin@kmjournal.net