AI chatbots are increasingly used as confidants and stand-in therapists. But conversations with chatbots lack established legal privileges such as doctor-patient or attorney-client confidentiality, and platform terms of service and data-retention policies can permit storage of sensitive chat records, which can be subpoenaed if relevant to litigation or a government investigation. Until legal protections for chatbot conversations are clarified, users should exercise caution: review platform terms and data-retention practices, avoid sharing deeply personal or legally relevant information, and consider protected, professional channels for sensitive matters. Many legal professionals and technology leaders have noted this protection gap, particularly as it affects younger users.
Two lawyers with expertise in AI-related legal issues told Business Insider that people should exercise caution when conversing with AI chatbots, be familiar with their terms of service and data retention policies, and understand that sensitive chat records, if relevant, could be subpoenaed in a lawsuit or government investigation. "People are just pouring their hearts out in these chats, and I think they need to be cautious," said Juan Perla, a partner at the global firm Curtis, Mallet-Prevost, Colt & Mosle LLP.
OpenAI CEO Sam Altman raised this point during a podcast that aired last month, noting that users, especially young people, are frequently turning to ChatGPT as a therapist or life coach. "People," the billionaire said, "talk about the most personal shit in their lives to ChatGPT." "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's like legal privilege for it - there's doctor-patient confidentiality, there's legal confidentiality," Altman told podcaster Theo Von.