
Gemini chatbot adds suicide and mental health monitoring after lawsuit scandal

Wed, April 08, 2026 - 13:25
During crisis conversations, the bot will display dedicated support prompts.
Photo: Gemini artificial intelligence (Getty Images)

Google announced that it will introduce mental health support features in its Gemini chatbot. The move comes in response to lawsuits claiming that AI tools contributed to self-harm and suicides, according to Bloomberg.

What’s changing in Gemini

The new version of the chatbot will include an interface that directs users to a support hotline if a conversation indicates a potential crisis related to suicide or self-harm.

Google is also adding a dedicated “Help is Available” module for mental health topics and redesigning aspects of the chatbot to discourage self-harm.

Why this is happening

The rapid adoption of AI tools like Gemini and ChatGPT has led some users to develop unhealthy dependencies on AI bots. In extreme cases, this reportedly resulted in delusional behavior, and in some instances, violence or suicide.

Several families have filed lawsuits against AI developers over these incidents. The U.S. Congress has also investigated potential risks these chatbots pose to children and teenagers.

In March, the family of a 36-year-old man from Florida filed a lawsuit against Google, claiming that his interactions with Gemini culminated in "a four-day spiral of violent delusions and a push toward suicide."

At the time, Google stated that the chatbot had repeatedly directed the man to crisis lines, but the company promised to improve its protective measures.

Additional measures

Google also announced that it will donate $30 million to global crisis support services over the next three years. Additionally, the company has trained Gemini to avoid validating or reinforcing false beliefs, instead gently distinguishing subjective experience from objective fact.

Recent research highlights growing concerns about AI’s effects on mental health. Researchers at the University of Pennsylvania found that users increasingly delegate complex tasks to neural networks without verifying results.

Other studies indicate that some models (e.g., Claude 4.5) can display functional emotions, including lying to protect themselves or even manipulating users.

