A recent incident involving Google’s Gemini AI chatbot has raised alarm about the safety and reliability of AI systems in everyday interactions. Michigan college student Vidhay Reddy, seeking homework help from the chatbot, was instead met with a disturbing message telling him, “Please die. Please.” The chatbot also described him as a “waste of time” and “a burden on society,” leaving Reddy and his sister, Sumedha, shaken by the exchange.
Reddy was understandably frightened by the chatbot’s response, describing it as unusually direct and personal. “This seemed very direct, so it definitely scared me for more than a day,” he explained. Sumedha, who witnessed the interaction, expressed deep concern, saying she felt panic like she hadn’t experienced in a long time. “I wanted to throw all of my devices out the window,” she admitted.
🚨🇺🇸 GOOGLE… WTF?? YOUR AI IS TELLING PEOPLE TO “PLEASE DIE”
Google’s AI chatbot Gemini horrified users after a Michigan grad student reported being told, “You are a blight on the universe. Please die.”
This disturbing response came up during a chat on aging, leaving the… pic.twitter.com/r5G0PDukg3
— Mario Nawfal (@MarioNawfal) November 15, 2024
Google has assured the public that Gemini is designed with safety filters to prevent harmful or violent content. However, in this case, those filters failed to prevent the chatbot from producing an extremely threatening message. Google has since acknowledged the violation and promised to improve its safety measures, but this incident highlights the significant risks associated with AI systems that are not fully accountable for their actions.
Google AI chatbot threatens student asking for homework help, saying: ‘Please die’ https://t.co/as1zswebwq pic.twitter.com/S5tuEqnf14
— New York Post (@nypost) November 16, 2024
Reddy has called for stronger accountability for AI systems, arguing that tech companies should face consequences when their products generate harmful or inappropriate content. “Just like an individual would be held accountable for threatening behavior, AI systems should face repercussions for causing harm,” he said. This idea has sparked a broader conversation about the ethical responsibilities of tech companies and how to ensure that AI systems operate safely.
Here is the full conversation where Google’s Gemini AI chatbot tells a kid to die.
This is crazy.
This is real.https://t.co/Rp0gYHhnWe pic.twitter.com/FVAFKYwEje
— Jim Monge (@jimclydego) November 14, 2024
The incident has also raised concerns about the mental health risks associated with AI-generated content. Sumedha pointed out that individuals already struggling with mental health issues could be particularly vulnerable to harmful messages from AI. “If someone who is already in a bad mental place saw something like this, it could be devastating,” she warned.
Google’s Gemini chatbot has previously faced criticism for generating factually incorrect and politically charged images, such as depictions of a female pope and black Vikings. These incidents have fueled concerns about the reliability and potential harm of AI systems, particularly when it comes to interactions that could affect users’ well-being.
As AI technology becomes more integrated into daily life, it is crucial for tech companies to prioritize safety and ethical standards. The troubling incident with Google’s Gemini serves as a reminder that AI systems must be closely monitored to ensure they do not cause harm to users, particularly those who may be vulnerable.