
OpenAI, Microsoft face lawsuit over ChatGPT's alleged role in Connecticut murder-suicide

publish time

11/12/2025

The OpenAI logo is displayed on a mobile phone in front of a computer screen with output from ChatGPT, March 21, 2023, in Boston. (AP)

SAN FRANCISCO, Dec 10, (AP): The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's "paranoid delusions" and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut. The lawsuit filed by Adams' estate on Thursday in California Superior Court in San Francisco alleges OpenAI "designed and distributed a defective product that validated a user's paranoid delusions about his own mother."

It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

"Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life - except ChatGPT itself," the lawsuit says. "It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his 'adversary circle.'"

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

"This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said. "We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

Soelberg's YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn't mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose.