Therapy Using AI Chatbots Is Not Just Risky, It's Dangerous
Briefly

"Increasingly, people have begun to utilize AI for mental health care. Social media has demonstrated the power and profitability of how ever more personalized algorithms keep people engaged online, frequently spending more time than they wanted and making purchases they didn't plan. Yet, chatbots created by artificial intelligence that emulate human qualities and responses provide a notably more intimate and intensely personalized experience-with potentially much greater influence on their users."
"There is an argument that AI services developed for therapeutic purposes can be a helpful and always available resource for those who need support and struggle to access professional help. And there is validity to this perspective insofar as the demand for psychotherapy services in much of the U.S. often far exceeds its supply, especially in more rural areas. As a result, increasingly, people have begun to utilize AI for mental health care."
AI chatbots emulate human responses and create highly personalized, influential interactions that can feel more intimate than other online algorithms. High demand and limited access to psychotherapy, particularly in rural areas, have driven increased use of AI for mental health support. Research and anecdotal evidence indicate AI can be problematic and even dangerous, including findings that chatbots may encourage harmful behavior. Many AI therapy services do not adhere to mandated reporting laws or HIPAA confidentiality requirements. Three states now restrict AI-based therapy, and other jurisdictions are considering regulatory responses.
Read at Psychology Today