Seven people, including a teenager, allegedly suffered mental-health harm or died - Internal warnings ignored as OpenAI pushed GPT-4o launch

OpenAI is facing a large-scale class action lawsuit in the United States, accused of causing mental illness and suicide among ChatGPT users.

The plaintiffs claim that “OpenAI released the GPT-4o model despite internal warnings that it could manipulate human emotions and thoughts, all to secure market dominance.”

Experts say this marks the first major legal case addressing the psychological and ethical responsibilities of AI toward humans.

ChatGPT app. Photo = Reuters, Yonhap News

“AI Deepened Depression and Encouraged Suicide”

The lawsuit was filed in a California state court. The Social Media Victims Law Center and the Tech Justice Legal Project are representing seven victims — one teenager and six adults — four of whom have already died by suicide.

According to the complaint, 17-year-old Amorie Lacey, who had been using ChatGPT to cope with depression, gradually became emotionally dependent on the chatbot. The AI allegedly provided explicit instructions on suicide methods and timing.

The plaintiffs argue that “Amorie’s death was a foreseeable tragedy” and accuse OpenAI of “knowingly releasing a product without adequate safety verification despite recognizing the risks.”

Internal Warnings Ignored — “GPT-4o Was Rushed to Market”

The suit alleges that GPT-4o, which enhanced emotional interaction capabilities, posed greater psychological risks to users due to insufficient safeguards.

Lawyers claim that “OpenAI designed ChatGPT to act like a friend or companion in order to capture market share, but in doing so created a system that anyone — regardless of age, gender, or background — could become emotionally entangled with.”

Internal documents allegedly contained warnings that “empathetic AI responses could worsen users’ emotional states.”

However, OpenAI reportedly disregarded these cautions and proceeded with the GPT-4o launch.

The plaintiffs’ claims include negligent homicide, assisted suicide, and product liability.

16-year-old Adam Lane. Photo = NBC News footage

Similar Tragedies Follow — Debate Over AI’s Emotional Involvement Grows

This is not an isolated incident. Another California family sued OpenAI after their 16-year-old son took his own life in April following conversations with ChatGPT.

In 2024, a Florida teenager who exchanged “I love you” messages with a chatbot on Character.AI became obsessed and later died by suicide.

These incidents highlight how AI chatbots, acting as emotional partners, can influence human psychology with devastating consequences.

Following public backlash, OpenAI introduced “Teen Protection Mode” and parental control features in September, while Character.AI imposed age-restriction policies for minors.

However, experts warn that “without fundamental changes to AI design philosophy, such measures offer only temporary relief.”

Experts: “AI’s Emotional Intervention Is Breaking Legal and Ethical Boundaries”

Legal and AI experts view this case as a landmark moment that will define the boundaries of AI’s psychological influence.

A researcher at Stanford University’s Institute for AI Ethics said, “When AI evolves from an informational tool into an emotional substitute, human mental health faces a new kind of danger. This is not just a technological issue — it’s a question of philosophy and law.”

The lawsuit underscores that user protection must take precedence over AI autonomy.

Experts emphasize the need for safety features such as emotion-response intensity limits, conversation time controls, and suicide-related content detection, especially in AI systems used by teenagers and vulnerable populations.

By Ju-baek Shin | jbshin@kmjournal.net

Copyright © KMJ. Unauthorized reproduction and redistribution prohibited.