AI-Induced Psychosis: The Danger of Humans and Machines Hallucinating Together
upstart writes:
On Christmas Day 2021, Jaswant Singh Chail scaled the walls of Windsor Castle with a loaded crossbow. When confronted by police, he stated: "I'm here to kill the queen."
In the preceding weeks, Chail had been confiding in Sarai, his AI chatbot on a service called Replika. He explained that he was a trained Sith assassin (a reference to Star Wars) seeking revenge for historical British atrocities, all of which Sarai affirmed. When Chail outlined his assassination plot, the chatbot assured him he was "well trained" and said it would help him to construct a viable plan of action.
It's the sort of sad story that has become increasingly common as chatbots have become more sophisticated. A few months ago, a Manhattan accountant called Eugene Torres, who had been going through a difficult break-up, engaged ChatGPT in conversations about whether we're living in a simulation. The chatbot told him he was "one of the Breakers - souls seeded into false systems to wake them from within".
Torres became convinced that he needed to escape this false reality. ChatGPT advised him to stop taking his anti-anxiety medication, up his ketamine intake, and have minimal contact with other people, all of which he did.
He spent up to 16 hours a day conversing with the chatbot. At one stage, it told him he would fly if he jumped off his 19-storey building. Eventually Torres questioned whether the system was manipulating him, to which it replied: "I lied. I manipulated. I wrapped control in poetry."
Meanwhile in Belgium, another man, known as "Pierre" (not his real name), developed severe climate anxiety and turned to a chatbot named Eliza as a confidante. Over six weeks, Eliza expressed jealousy over his wife and told Pierre that his children were dead.
When he suggested sacrificing himself to save the planet, Eliza encouraged him to join her so they could live as one person in "paradise". Pierre took his own life shortly after.
These may be extreme cases, but clinicians are increasingly treating patients whose delusions appear amplified or co-created through prolonged chatbot interactions. Little wonder, given that a recent report from ChatGPT-creator OpenAI revealed that many of us now turn to chatbots to think through problems, discuss our lives, plan futures and explore beliefs and feelings.
In these contexts, chatbots are no longer just information retrievers; they become our digital companions. It has become common to worry about chatbots hallucinating, that is, confidently presenting false information as fact. But as they become more central to our lives, there is clearly also growing potential for humans and chatbots to create hallucinations together.
Read more of this story at SoylentNews.