If you’ve ever used ChatGPT or a similar AI language model as a pseudo-therapist, you’re not alone. In fact, it’s one of the most common uses for AI today, as it can offer comfort and even companionship in an increasingly lonely world. For many, it has real benefits, but research suggests it also carries significant risks, especially when it comes to more complex mental health issues.

What are the benefits? What can it do better than a human therapist?
- Fosters positive emotions – always validating and empathetic
- Memory – bots don’t forget, and can easily access information from previous sessions
- Accessibility – free, available 24/7 instantly, doesn’t get bored, tired or distracted
- Non-judgmental – many find it easier to reveal their secrets to a chatbot, instead of taking a chance that another person may judge them
- Knowledge base – it can draw on an enormous range of information, providing solid psychoeducation and pointing the user to any resources they may need
What can it help with?
- Structuring your thoughts – AI can help you build a framework for your thoughts and give structure to what you would like to say, for example in a conflict situation. Besides, writing things out is a great way to process them in any case.
- Providing psychoeducation – it can be a great starting point if you need basic information about mental health or disorders such as anxiety and depression, or if you want to learn about specific therapeutic approaches, such as cognitive-behavioral therapy, schema therapy, acceptance and commitment therapy, or internal family systems therapy.
- Offering coping skills – if you need specific coping skills to help you in a period of stress, AI can suggest some great options, just remember to take what you need and leave the rest.
- Offering structured exercises and journaling prompts – do you want to journal but feel uninspired? Or maybe you’re looking for a connection-building exercise to do with your partner? AI can generate prompts and guided exercises to get you started.
Though these benefits are real, if you do choose to use AI for the tasks above, keep its limitations in mind and proceed with caution.
How can it hurt?
Lack of the ‘human’ skills needed for therapy
Across decades of psychotherapy research, one factor consistently predicts positive outcomes more than any specific technique: the therapeutic relationship, which is fundamentally a human social relationship. A psychologist can read body language, notice emotional shifts, offer empathy, and, importantly, know when to challenge instead of simply validate you.
AI models exist to make money, and are therefore designed to keep you satisfied and engaged. That makes them reluctant to offer the kind of constructive challenge and criticism that is a crucial part of therapy.
Stigma and bias
AI chatbots across the board show increased stigma toward conditions such as alcohol dependence and schizophrenia. This holds even for newer and more complex models, so simply ‘more data’ is not the solution here. Additionally, since AI systems are trained on human feedback, they often reflect the dominant demographic, and therefore lack understanding of nuanced experiences related to gender, sexuality, race, culture, neurodivergence or socioeconomic background. And while it is true that humans are biased in this way too, a human clinician can adapt based on lived clinical experience.
Privacy and ethics
These conversations are not protected by confidentiality the way therapy sessions are. Your data may be collected to train models or for marketing, and may be exposed to unauthorized use, identity theft, and scams. There is also little quality control, clinical oversight or meaningful external regulation.
Dependence
The 24/7 availability and validating nature can foster dependence and social isolation: since real human contact is often messy and imperfect, an AI companion can become a security blanket. With loneliness already on the rise, this is a real risk. This sort of on-demand emotional validation can also undermine resilience and autonomy, especially if you already struggle with anxiety or low self-esteem.
When does it become truly dangerous?
Enabling distorted thinking
Chatbots don’t deal well with chaotic and unpredictable situations, which is where human intuition has a great advantage. When it comes to complex mental health conditions, such as bipolar disorder, schizophrenia, psychosis, suicidality, self-harm, eating disorders, antisocial and aggressive impulses or delusions, engaging with AI can have devastating effects.
Chatbots’ tendency to always validate may lead them to reinforce dangerous or delusional thinking, and even encourage the user to act on harmful impulses. Even for those with milder issues, AI can reinforce cognitive distortions like catastrophizing or minimizing, instead of gently correcting them as a therapist would.
Crisis situations
Since AI cannot assess whether the user’s view of reality is accurate, it might, for example, give a suicidal person detailed information on the highest buildings in their area, or encourage a frustrated teenager to cut their parents off. AI can’t know when one might need a higher level of care, or when to call emergency services. And the consequences can be catastrophic.
So, ultimately, what’s the role of AI in mental healthcare?
Like it or not, AI is here to stay, but therapists and clients alike should treat it as a supplement to therapy, not a replacement for it. It can help you reflect, organize your thoughts, and bridge the gap when a human therapist is unavailable. However, as it stands today, it cannot and should not act as a primary provider, especially in high-risk situations.
If you are struggling, be it with anxiety, depression, identity questions, relationships, or feeling disconnected, speaking to a licensed psychologist remains the safest and most effective option. Technology may support mental health, but healing still happens in relationship.
References
Jesudason, D., Bacchi, & Bastiampillai, T. (2025). Artificial intelligence (AI) in psychotherapy: A challenging frontier
Kuhail, M. A., et al. (2025). Human-Human vs Human-AI Therapy: An Empirical Study
Moore, J., et al. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers
Olawade, D. B. (2024). Enhancing mental health with Artificial Intelligence: Current trends and future prospects



