The creator of an AI therapy app shut it down after deciding it's too dangerous. Here's why he thinks AI chatbots aren't safe for mental health | Fortune
"We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation," he wrote on LinkedIn. "But the moment someone truly vulnerable reaches out - someone in crisis, someone with deep trauma, someone contemplating ending their life - AI becomes dangerous. Not just inadequate. Dangerous." In a reply to one commenter, he added, "the risks kept me up all night."