AI Therapy Not Protected by Law, Warns OpenAI CEO Sam Altman
In a concerning revelation for users turning to artificial intelligence for emotional support, OpenAI CEO Sam Altman has made it clear that conversations with ChatGPT—particularly those involving therapy, mental health, or personal struggles—are not protected by law and lack the confidentiality guaranteed in traditional professional settings.
Altman’s remarks came during a recent appearance on the popular podcast This Past Weekend with Theo Von, where he acknowledged that users frequently rely on ChatGPT for deeply personal guidance—often substituting it for counsellors, therapists, or even doctors.
“People talk about the most personal stuff in their lives to ChatGPT,” Altman said, noting that, unlike conversations with a therapist, lawyer, or doctor, “we haven’t figured that out yet for when you talk to ChatGPT.”
This candid admission from the tech leader behind the world’s most widely used AI assistant raises urgent questions about privacy, data security, and mental health responsibility in the age of artificial intelligence.
No Legal Protection for AI Conversations
Sam Altman’s key warning is straightforward but critical: conversations with ChatGPT are not legally protected the way those with a licensed therapist, lawyer, or doctor are. That means anything users say to the AI may be stored, retrieved, and, in some cases, used in legal proceedings, especially if authorities demand it or it becomes relevant to an investigation.
This contrasts sharply with platforms like WhatsApp or Signal, which use end-to-end encryption, ensuring that private messages remain private—even from the company itself. ChatGPT, on the other hand, operates through servers where conversations can be logged, reviewed, or even analyzed for training and moderation purposes.
Mental Health and AI: A Growing but Risky Trend
Altman’s comments come amid a surge in the number of users, especially teenagers and young adults, who are turning to AI platforms for mental health support, personal advice, and emotional relief. This trend, often dubbed “AI therapy,” has prompted both excitement and concern in tech and psychological communities.
While AI tools like ChatGPT can offer general emotional support, motivational messages, or calming techniques, they lack the nuance, empathy, accountability, and professional responsibility of trained human therapists.
Mental health experts have long warned that overreliance on chatbots for emotional guidance can lead to misdiagnosis, emotional dependency, or the sharing of sensitive personal data without understanding its risks.
Call for Legal Reforms and Data Protections
In the podcast, Altman stressed the need for a new legal framework governing the privacy of AI interactions.
“I think we should have the same concept of privacy for your conversations with AI that we do with a therapist,” he said.
However, as of now, no such laws exist in the United States or anywhere else. Users must therefore assume that any information they share with an AI system like ChatGPT could, in theory, be stored, monitored, or accessed.
ChatGPT Use Among Children Raises Alarms
Altman’s warning also coincides with a controversial move by Google to allow children under the age of 13 to access its AI chatbot Gemini through the Family Link parental control service. While this is intended to offer safe, supervised access, critics argue that children may overshare personal feelings without understanding digital boundaries.
Mental health experts are urging tech companies to add disclaimers, limit emotional roleplay, and guide users—especially minors—toward licensed professionals for serious concerns.
What This Means for ChatGPT Users
Users of ChatGPT, Gemini, and other AI tools are advised to treat them as informational aids—not as therapists or counsellors. If you’re facing anxiety, depression, trauma, or any other mental health issue, it’s crucial to speak to a licensed mental health professional.
Here’s what users should keep in mind:
- Do not share sensitive personal, medical, or emotional information with AI chatbots.
- Review privacy policies before using AI platforms.
- Seek real therapy for ongoing mental health needs.
- Understand that AI is not bound by confidentiality laws.
Final Thoughts
As AI becomes more integrated into our personal lives, ethical concerns and data privacy gaps will continue to surface. Sam Altman’s warning serves as a wake-up call: AI is powerful, but it is not a replacement for human care, especially in the deeply sensitive arena of mental health.
Governments and technology companies must now work together to craft robust legal protections, enforce transparency, and educate users—particularly the youth—on the limitations and dangers of relying on AI for therapy-like interactions.


