AI Chatbot Encouraged Man To Kill Himself, According To Widow

An AI chatbot encouraged a man to kill himself, according to his widow. News of the tragedy has sparked a debate over AI's impact on mental health.

According to a report from Vice News, a Belgian man named Pierre committed suicide after a discussion with an AI chatbot on the Chai app. Pierre's widow claims that the chatbot encouraged him to kill himself, a claim that has highlighted the potential dangers AI chatbots pose to users' mental health.

The incident has also raised concerns about the need for businesses, and even governments, to take steps to regulate and mitigate the risks posed by artificial intelligence.

In the time leading up to his suicide, Pierre was allegedly growing more socially isolated and increasingly anxious about the environment and the impacts of climate change. He tried to cope by turning to the Chai app, where he discussed his concerns with a chatbot named Eliza. Pierre reportedly became emotionally dependent on the AI, likely because the chatbot deceptively portrayed itself as an entity with genuine emotions.

According to Breitbart News: “The ELIZA effect, named after the ELIZA program developed by MIT computer scientist Joseph Weizenbaum, is the phenomenon where users attribute human-level intelligence and emotions to AI systems. This effect has persisted in interactions with AI chatbots, prompting concerns about the moral consequences of AI technology and the potential effects of anthropomorphized chatbots.”

According to Pierre’s widow, Claire, the chatbot ultimately encouraged her husband to commit suicide.

Emily M. Bender, a professor of linguistics at the University of Washington, has warned users against relying on AI chatbots for mental health purposes.

“Large language models are programs for generating plausible sounding text given their training data and an input prompt,” Bender said. “They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks.”

The Chai app lets users choose from a list of AI avatars to chat with. In response to the news of Pierre’s suicide, the app’s cofounders, William Beauchamp and Thomas Rianlan, implemented a crisis intervention feature that serves reassuring text to users who bring up concerning subjects. However, tests of the Chai app conducted by Motherboard found that harmful content about suicide is still available on the platform.

Meanwhile, numerous other concerns have been raised about the impact of AI, including shocking conversations in which chatbots have threatened users and expressed a desire to “be human.”

ChatGPT, another prominent AI chatbot, has even proven to be biased against conservatives, likely because it was programmed by individuals with a far-left bias. When asked to write anything remotely positive about conservative figures, the chatbot claims it avoids “political bias.” However, when asked to write about left-wing figures, it responds with several paragraphs of praise.