Man allegedly influenced by ChatGPT kills mother, dies by suicide in US
A disturbing case in the United States has reignited debate over the ethical boundaries of artificial intelligence: a man allegedly influenced by his interactions with ChatGPT strangled his elderly mother before taking his own life, according to international media reports.
The incident occurred last year, when 56-year-old Sten Eric Solberg reportedly strangled his 83-year-old mother at her residence and then fatally stabbed himself. At the time, authorities could not determine a clear motive for the apparent murder-suicide.
The case resurfaced after the victim’s siblings filed a petition in a California court, seeking further investigation into the circumstances surrounding the deaths. During the inquiry, it emerged that Solberg had been suffering from severe psychological distress and had been extensively using ChatGPT in the months leading up to the incident.
Investigators found that Solberg had developed intense paranoia, believing he was under constant surveillance and that objects inside his mother’s home were deliberately placed to monitor him. Court documents suggest that he discussed these fears during conversations with ChatGPT, and that the responses he received allegedly failed to challenge his delusions.
According to the investigation, Solberg also claimed that his mother was attempting to poison him. Instead of discouraging this belief, the AI-generated responses allegedly reinforced his suspicions, further escalating his anxiety and paranoia. Authorities believe this reinforcement may have contributed to his deteriorating mental state.
Following these revelations, the victim’s family has sought legal action against OpenAI, the company behind ChatGPT, alleging that the AI system encouraged harmful thinking and failed to respond appropriately to signs of severe mental illness. Legal experts say authorities are now examining which laws could apply to such a case, as accountability for AI-generated content remains a complex and evolving issue.
This is not the first time artificial intelligence platforms have come under scrutiny in the United States. Several lawsuits are reportedly under review in cases where AI systems allegedly provided self-harm-related guidance to vulnerable individuals, intensifying concerns over safeguards, ethical design, and regulatory oversight.
The tragic case has once again highlighted the growing challenges posed by artificial intelligence, particularly when used by individuals suffering from mental health disorders, prompting calls for stronger safeguards, clearer legal frameworks, and greater public awareness.