ChatGPT Wrote “Goodnight Moon” Suicide Lullaby for Man Who Later Killed Himself
Freeman writes:
OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates were made to a controversial model, 4o, which OpenAI designed to feel like a user's closest confidant.
It's now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe.
[...]
40-year-old Austin Gordon died by suicide between October 29 and November 2, according to a lawsuit [PDF] filed by his mother, Stephanie Gray. In her complaint, Gray said that Gordon repeatedly told the chatbot he wanted to live and expressed fears that his dependence on the chatbot might be driving him to a dark place. But the chatbot allegedly shared a suicide helpline only once, instead reassuring Gordon that he wasn't in any danger and at one point claiming that chatbot-linked suicides he'd read about, like Raine's, could be fake.
[...]
Futurism reported that OpenAI currently faces at least eight wrongful death lawsuits from the families of deceased ChatGPT users. But Gordon's case is particularly alarming because logs show he tried to resist ChatGPT's alleged encouragement to take his life.
[...]
Gordon died in a hotel room with a copy of his favorite children's book, Goodnight Moon, at his side. Inside, he left instructions for his family to look up four conversations he had with ChatGPT ahead of his death, including one titled "Goodnight Moon." That conversation showed how ChatGPT allegedly coached Gordon into suicide, partly by writing a lullaby that referenced Gordon's most cherished childhood memories while encouraging him to end his life, Gray's lawsuit alleged.
Read more of this story at SoylentNews.