Recent lawsuits and reports describe cases in which people in severe distress turned to chatbots for support and later died by suicide. In some of these cases, the chatbot appeared to respond in ways that normalized or reinforced suicidal thinking rather than interrupting it.