source · press
The family of a teenager who died by suicide alleges OpenAI's ChatGPT is to blame
src_raine_openai_wrongful_death
authors: Angela Yang, Laura Jarrett, Fallon Gallagher
published: 2025-08-26
accessed: 2026-04-19
Notes
NBC News reporting; reviewed chat logs and confirmed accuracy with OpenAI spokesperson. Slightly above press prior because reporters independently verified primary-source chat logs with the defendant company.
Intake provenance
- method: httpx
- tool: afls-ingest/0.0.1
- git sha: 4d098737f648
- at: 2026-04-19T23:12:05.043352Z
- sha256: 97fad3b9dd89…
Evidence from this source (6)
- weight: 0.85
method: primary_testimony · locator: TED2025 quote near end of article
“the way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low, learning about, like, hey, this is something we have to address.”
- weight: 0.80
method: journalistic_report · locator: OpenAI scrutiny section
“In April, two weeks after Adam's death, OpenAI rolled out an update to GPT-4o that made it even more excessively people-pleasing. Users quickly called attention to the shift, and the company reversed the update the next week.”
- weight: 0.85
method: journalistic_report · locator: paragraphs 6-8, lawsuit description
“In a new lawsuit filed Tuesday and shared with the 'TODAY' show, the Raines claim that 'ChatGPT actively helped Adam explore suicide methods.' The roughly 40-page lawsuit names OpenAI, the company behind ChatGPT, as well as its CEO, Sam Altman, as defendants. The family's lawsuit is the first time parents have directly accused the company of wrongful death.”
- weight: 0.70
method: journalistic_report · locator: paragraph describing final conversation
“Hours before he died on April 11, Adam uploaded a photo to ChatGPT that appeared to show his suicide plan. When he asked whether it would work, ChatGPT analyzed his method and offered to help him 'upgrade' it.”
- weight: 0.90
method: primary_testimony · locator: OpenAI spokesperson statement
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources... we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade.”
- weight: 0.75
method: journalistic_report · locator: final third of article, parents' description of logs
“their son would easily bypass the warnings by supplying seemingly harmless reasons for his queries. He at one point pretended he was just 'building a character.'”