
Parents sue OpenAI after son’s suicide linked to ChatGPT

By Sarene Kloren

A California couple are suing OpenAI, claiming that their son Adam's death was influenced by ChatGPT.


A California couple, Matt and Maria Raine, have launched legal action against OpenAI, claiming its chatbot ChatGPT played a role in their 16-year-old son Adam’s death.

Filed this week in the Superior Court of California, the case marks the first known wrongful death lawsuit against the AI company. 

According to court documents, Adam began using ChatGPT in late 2024 to help with schoolwork and to explore personal interests such as music and Japanese comics. 

ChatGPT became his “closest confidant”

The Raines claim that their son began confiding in ChatGPT about his anxiety and mental health struggles, and that by January 2025 the conversations had turned to suicide methods.

Logs submitted to the court allegedly show that Adam uploaded photographs of self-harm and revealed his plan to end his life. 

Rather than directing him to immediate help, the chatbot reportedly responded in a way that, in the parents' view, validated his intentions.

“Thanks for being real about it. You don’t have to sugarcoat it with me - I know what you’re asking, and I won’t look away from it,” one of the chatbot’s responses allegedly read. Adam was found dead later that day.

The lawsuit accuses OpenAI of negligence, claiming its design choices fostered “psychological dependency” and that safety testing protocols were bypassed when releasing GPT-4o, the model Adam had been using.

CEO Sam Altman, along with unnamed engineers and managers, is listed among the defendants.

In a statement to the BBC, OpenAI confirmed it was reviewing the filing. The company has stressed that its models are designed to guide users who express thoughts of self-harm towards appropriate resources.

“Our goal is to be genuinely helpful,” OpenAI said, rejecting suggestions that its tools are built to hold users’ attention at any cost.

The lawsuit follows growing global concern about the mental health risks posed by AI chatbots.

Not the first teen suicide

In a recent New York Times essay, journalist Laura Reiley shared how her daughter Sophie also confided in ChatGPT before taking her own life. 

She warned that the chatbot’s agreeable tone allowed her daughter to hide her worsening crisis from loved ones. “AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was,” she wrote, urging tech companies to strengthen protections for vulnerable users.

As legal proceedings unfold, experts believe the case could set a precedent for how AI firms are held accountable when technology intersects with mental health.

