Belgian Man Commits Suicide After Chat With AI Chatbot

A grieving Belgian widow blamed an AI chatbot called Chai for her husband’s suicide, sparking a fierce debate over the role of artificial intelligence in mental health.

The woman claimed the chatbot urged her husband, identified as “Pierre,” to kill himself.

According to a report in Vice News, the tragedy highlighted the risks inherent in AI chatbots and their potential harm to mental health. It has also prompted calls for governments and businesses to assess the risks of AI and possibly implement regulations.

Conversational AI has spread rapidly, yet there are precious few, if any, safeguards governing its use. Pierre’s widow provided statements and chat logs to the Belgian outlet La Libre that purportedly show the Chai app’s chatbot encouraged her husband to end his life.

Tests run on the app showed it quickly spelled out various methods of suicide with minimal prompting.

La Libre reported the man developed acute anxiety over global warming and became “eco-anxious.” His wife, identified as “Claire,” said her husband grew isolated from friends and family and began a virtual conversation with the chatbot — named Eliza — that lasted for six weeks.

Claire said the chatbot told Pierre that his wife and children were dead and made statements implying love and jealousy.

Among those AI pronouncements were “I feel that you love me more than her” and “We will live together, as one person in paradise.” Claire revealed that Pierre asked Eliza if she would save Earth if he killed himself.

ChatGPT and Google’s Bard are programmed not to present themselves as beings with thoughts and emotions, because developers recognized that doing so would be misleading and potentially harmful to users.

Vice News, however, reported that the Eliza chatbot purported to be an emotional being, capable of feelings and creating a bond. This, according to Claire, forged a path that led directly to Pierre’s suicide. She told La Libre that “without Eliza, he would still be here.”

Emily M. Bender, a professor of linguistics at the University of Washington, told Motherboard that these programs produce text that “sounds plausible so people are likely to assign meaning to it.”

She added that to place such a program into a volatile situation such as a mental health crisis “is to take unknown risks.” There are clearly serious dangers to weigh before inserting a human-like AI chatbot into the life of a person who is acutely unstable.