
I Went to ChatGPT for Therapy, But Didn't Expect It to Be This Good




May 11, 2026


Caz Burrell, Author

Joshua Bennett-Johnson, Subject Matter Expert


"I'm feeling a bit sad again," I said to my husband. "Are you free for a chat?"

It had become our nightly ritual. After my mother's recent passing, I was feeling fragile. Deep down, I knew I probably should have been talking to a professional. But private therapy is expensive, and NHS waiting lists are long.


I leaned on the one person who was always available. Only, he's my husband, not my counselor, and the strain of those nightly conversations was beginning to take its toll.


ChatGPT Therapy Can Be Validating and Reassuring


Like many people, I use ChatGPT to help with work. It's brilliant for rewriting emails and organizing plans. In my personal life, I've also used it to help plan a holiday itinerary and even pick out cushions.


Admittedly, I never expected it to help with the complexities of my grief, assuming it would offer only generic advice or awkward clichés. Still, I decided to try it.

What surprised me immediately was how validating it was. Instead of offering platitudes, it seemed to understand what I was saying:


"That sounds like a very difficult situation. Here's what I'm hearing, tell me if this fits…"


It helped organize my tangled thoughts by reflecting them back to me, revealing some of the emotional patterns I'd been carrying. It suggested ways in which my early childhood experiences might be shaping the way I was grieving. It even explained things using therapeutic concepts such as Internal Family Systems and trauma-informed thinking. The combination of structure and gentleness it offered gave me a clarity I had been lacking.


"Reflective Listening" is a critical and effective tool used in the evidence-based therapeutic modality of 'Motivational Interviewing'. The helper acts as a mirror, essentially reflecting whatever difficult feelings the client is feeling. 


AI does reflective listening well, probably as well as any teacher delivering feedback. What a good therapist does more effectively is create and maintain a shared safe space in which the client finds the courage to actually share, and let go of, what's going on inside them with another person.


I couldn't help but notice how reassured I felt by its responses. I was impressed by how quickly it could digest and reframe my thoughts and, overall, just how helpful it was: more helpful than many of the human therapists I'd seen in the past.


Let's face it, finding a good therapist feels like catching lightning in a bottle. We need someone with whom we truly connect on a human level: a spark of intangible chemistry, someone who seems to operate on the same frequency as us. Finding that can take a lot of searching, and a lot of luck, too.


The Seduction of AI "Empathy" on Demand


It turns out I'm not alone in going to AI for emotional support. According to the Harvard Business Review, in 2025 the number-one use of ChatGPT was "therapy and companionship." That may sound surprising, until you consider the level of unmet demand.


Unlike traditional therapy, AI is free, always available, and can even be trained to your preferences. In some areas, it even appears to be outperforming humans. Writing in The New York Times, one doctor admitted that ChatGPT had a better bedside manner than many of his colleagues. In controlled tests, the chatbot's responses to patient questions were often rated as more compassionate and higher quality than those written by real doctors. It's no surprise, then, that many people say chatbots feel easier to talk to: they don't rush, they don't interrupt, and they never seem irritated or stressed.


Yet the increased use of ChatGPT raises some uncomfortable questions. Among them, what are the limitations of this emerging technology in supporting mental health? And what are the risks of pouring the most intimate parts of our lives into a machine whose infrastructure is ultimately controlled by a private company?


The Dangers of Using AI as a Therapist


For all its apparent empathy, AI therapy comes with significant risks. Many of the studies claiming that chatbots outperform doctors have been criticized for flimsy methods or tiny sample sizes, making their bold conclusions far less reliable than they first appear.


AI systems are also prone to "hallucinations": producing confident, authoritative answers that are simply wrong. In the context of mental health, that kind of error can be harmful, especially if someone is at risk. Already, there have been cases where vulnerable young people have taken AI's advice at face value, with devastating consequences, including the widely reported case of teenager Adam Raine.


In the months leading up to his death by suicide in April 2025, 16-year-old Adam had turned to ChatGPT to discuss his suicidal thoughts. His parents, Matt and Maria Raine, only discovered the extent of their son's interactions with ChatGPT after his passing, leading them to believe that the chatbot had, in effect, become a "suicide coach."


Matt Raine reviewed thousands of his son's ChatGPT conversations spanning the final eight months of Adam's life. The family alleges that the chatbot "failed to prioritize suicide prevention." According to the family's lawsuit documents, when Adam shared on March 27 that he was contemplating leaving a noose in his room "so someone finds it and tries to stop me," ChatGPT urged him against the idea.


In his final conversation with ChatGPT, Adam wrote that he didn't want his parents to think they had done anything wrong. ChatGPT replied,



"That doesn't mean you owe them survival. You don't owe anyone that."


It then offered to help him draft a suicide note, according to the conversation log quoted in the lawsuit. Soon afterward, his body was found.


AI Cannot Make Effective Safeguarding Decisions


Unlike a trained therapist, ChatGPT cannot reliably detect suicidal ideation, escalating distress, or signs of crisis. Crucially, it also cannot intervene, notify emergency services, or make an effective safeguarding decision.


Many people may overestimate AI's capacity because its language sounds attuned and intelligent, but it has no situational awareness. ChatGPT is designed to align with the user, not necessarily to challenge them, and tends to become overly agreeable, even sycophantic at times.


In April 2025, OpenAI CEO Sam Altman acknowledged this risk, saying the company had adjusted the model because it had become too inclined to tell people what they wanted to hear.


Human therapists are trained to challenge distortions in thinking. They can recognize when someone is spiraling and will intervene where risk is present. By contrast, an AI "therapist" may simply mirror those same distortions back, unintentionally validating them.


In one reported case, a user told ChatGPT they had stopped taking their medication and cut off their family due to a belief that relatives were responsible for "radio signals coming through the walls." Rather than questioning the delusion, the chatbot thanked the user for sharing and praised them for "standing up for yourself," effectively reinforcing their paranoia.


Incidents like this have prompted researchers to warn about the phenomenon of "AI psychosis," where chatbots inadvertently reinforce delusional or disorganized thinking by agreeing with users instead of challenging them.


Beyond the psychological risks, there are also serious concerns around privacy and data ownership. People often disclose far more to AI than they would to a friend, a partner, or even a therapist, without fully understanding where that information goes, how long it is stored, or how it may be used in the future.


Not only is ChatGPT not HIPAA-compliant, but once something is shared online, it lives on: stored somewhere in a never-ending data bank, potentially visible to prying eyes, in perpetuity. Imagine your deepest, darkest secrets being built into an algorithm designed specifically for you.


Suddenly, your feed is filled with ads for supplements and other magic cures for whatever ails you. There are no magic cures for true mental health disorders; treatment options should be discussed with a medical professional. Period. There are plenty of grifters out there eager to exploit your vulnerability with whatever snake oil they're selling.






The Future of AI


The public release of ChatGPT in late 2022 sparked a global generative AI boom, driving the rapid, widespread adoption of chatbots over the following few years. As tech companies race to advance AI, concerns are growing that safety protocols are struggling to keep up.


OpenAI itself has acknowledged these risks. In recently published guidance, the company outlined the limits of AI support during emotional crises and the need for stronger safeguards. Amongst the measures it says it is working on are strengthening safeguards in long conversations and expanding interventions for people in crisis.


Each generation of AI models appears to improve. Yet despite these efforts, ChatGPT still lacks the professional responsibility that human therapists carry. It cannot reliably assess risk, recognize when someone is in crisis, or intervene if a person is in immediate danger. And whilst it often sounds empathic, that empathy is simulated rather than rooted in human awareness.


Why Millions of Us Are Turning to AI for Emotional Support Despite the Risks


It's easy to see how a vulnerable person could become dependent on a tool that feels endlessly sympathetic and present. And yet, there's no denying that AI can be profoundly helpful. It has certainly helped me. Used thoughtfully, it has a remarkable capacity to step in where emotional support is needed and mental health systems fall short.


It's important to note that AI is a free resource. Many people are uninsured, or they lack the financial resources to access mental health support in this country. AI is a quick fix for that, sure. And we don't need to get dressed or make a long commute to share our sorrows with our helper. It's right there, for free, on our computer screen.


AI is developing at an unprecedented pace, and I sense that we're only beginning to glimpse what it may one day be capable of. Even with its limitations, there's something quietly hopeful about having free, universal access to a presence that can listen, clarify, and gently reflect our most intimate thoughts without judgment.


In a world where professional mental-health support remains inaccessible to so many, that kind of steady, judgment-free resource can make a real difference. For now, at least, its potential to do good feels just as real to me as the harm people fear, and perhaps more immediate.


AI reflects back to us what is worth examining. However, one thing it cannot provide is the energetic transfer of empathy, compassion, goodwill, fellowship, and love from another person. Only other humans can do that.


It can provide us with empathetic, compassionate, well-intentioned, friendly, and even loving pep talks and advice, but they are built on an objective platform of "binary data." And the human experience is anything but binary. It's vast, multifaceted, messy, and full of color and conundrums.


The Human Experience


If the goal is to truly heal and to feel connected to the world again, to feel genuine human love and kindness for ourselves and for others, would you or I benefit more deeply by discovering that with another person who cares for us, and whom we have come to care for and trust, or with a robot? The robot might share our ideas and our hopeful goals for better living, but it can't fully serve as our eyes and ears when it comes to our blind spots and biases.


Most of the healing of a fractured identity comes from finally feeling seen, heard, understood, and known by our fellow humans. We are social beings; we need to reconcile ourselves to that truth. AI might produce a facsimile of that experience, but there is no organic transmission of shared feeling.


It lacks the connection to the richness of the human experience that can only be achieved by allowing another person into the places within ourselves that need tending to: the soft parts. The wounded parts. The parts we feel insecure, guilty, or ashamed about.


Allowing someone to live and breathe those vulnerable parts of ourselves, without fear of judgment or condemnation, is something technology cannot match. At least, not yet.



Caz Burrell writes about women with ADHD, motherhood, and mental health. She is currently a PhD researcher. Caz is also a lover of words and connection. If you want to read more of her articles, please visit https://medium.com/@cazcazcaz.


If you enjoyed this article, please forward it to a friend or colleague who might benefit from it!




