Should an AI Chatbot Be Your Therapist? Or your advisor, coach, or consultant for that matter?

What makes therapy work?

Much of my dissertation research involved embodied cognition and how our ways of talking shape the worlds we build, and that work feels relevant now as we encounter AI and large language model chatbots like ChatGPT and Claude. Out of curiosity, I started using them to discover their usefulness and limitations. (I wrote this article without a chatbot.) The hype about AI and chatbots has bumped into the reality of human complexity, and there are now efforts to build therapeutic chatbots. I got curious about how that is working out. Here’s what I learned.

There are two important, often unsung aspects of the therapeutic relationship, those special conversations we have with a trained, paid professional.

First, when you enter into this relationship, you will be encouraged and supported in facing the difficult parts of yourself and your relationships that you haven’t been able to resolve on your own. There is a vulnerability in trusting another human to be kind yet effective with you. The risk of being received versus rejected by another human being is real. Bots give the illusion of acceptance without the risk. A bot will tend to mirror what you give it. It has no free will of its own, no intention to connect with or love you as another human does. Any intentionality it exhibits is programmed into it by the humans who made it. And AI is built for profit, not healing.

Take a phrase that we use in constellation work, “I see you.” If a chatbot says, “I see you,” first of all, it doesn’t see you, not in the way a human would. Our brains are structured to pick out faces and identify another living being as human. Nor can a chatbot see you as “human like me.” It is this shared sense of “you are one of us” that is an important part of the healing process. We feel felt and understood by another human, and that is such an important part of healing attachment and trauma wounds. No matter how sympathetic its words, a chatbot cannot feel with you or put a warm arm around your shoulders to comfort you when you face a loss. We evolved to rely on shared bonds with fellow humans to regain our wholeness and make this journey through life.

We, however, tend to see the world through human eyes; it is the only view we know. We will assign intention and feeling to something as simple as three geometric shapes moving around on a screen. In an experiment done with two triangles and a square moving in patterns on a computer screen, participants said things like, “The two triangles ganged up and attacked the square, and the square didn’t like it and tried to get away.” Of course, the shapes had no intentions or feelings of their own, yet we humans are used to seeing another mind behind the actions of the animals we interact with.

There is a narcissistic illusion of perfectibility that animates the techie desire to be replaced by a more perfect version of ourselves. Our beauty lies in our brokenness, in our humble acceptance of the human condition, and hopefully in our compassion for the joys and suffering of our fellow humans. A large language model algorithm cannot provide that.

In addition to the human-to-human connection that a therapeutic relationship provides, a good human therapist is able to read not just your words, but also your facial expression, the way your breath caught in your throat when you spoke of your mother’s suffering, for instance, or the way your eyes teared up and went to the floor when you spoke of your fears or disappointments in life. A good therapist knows when to pause, to let you feel the swell of those feelings that were not safe for you to face alone. This exquisite sense of timing and pacing, the somatic dance we do together when we enter into that flow space of understanding with one another, is something a chatbot cannot do. It is these precious moments of resonance that assure us we are not alone with our fears and troubles. Another human understands and can face with us what is so difficult to face alone.

Chatbots assume that only the words matter, yet in human communication and life, so much more is going on. Even on Zoom we track breathing, voice tone, timing, volume, facial expression, gesture, and so on, in order to make meaning of each other’s words. The shared context of being two living beings matters.

So, what can a chatbot do? There is research that indicates a chatbot can provide helpful listening and reflection. Expression is important to us. Chatbots can provide a mirror, a place to hear ourselves think.

Chatbots are available 24/7, so when you need to get something off your chest in the middle of the night, a chatbot can listen while your therapist is asleep (don’t wake them, please!). There is evidence that chatbots with therapeutic training can provide cognitive guidance similar to CBT (cognitive behavioral therapy) and other self-help strategies. So as a strategic thinking partner, a chatbot can be useful.

Unfortunately, there is also a small but growing number of cases where that sympathetic mirroring veers into sycophantic support for delusional thinking. What you share with a chatbot is not confidential or regulated, nor do chatbots agree to follow a code of ethics and risk penalties, including lawsuits, if they fail to uphold it. Chatbots can be biased and have notoriously spit out harmful advice as well as useful adages.

I started by using ChatGPT 4, and while I found its sycophantic responses to my conversation prompts annoying (a simple follow-up question is not “Awesome!” or “Brilliant!”), I could feel the pull that effusive praise exerted on my ego. While the creators of these tools are attempting to correct for these errors, it is important to remember that these tools have been created by a very small number of people for profit, not for good.

If you turn to a chatbot for personal advice, keep in mind that it is a computer algorithm, not a person. Resist the urge to attribute intentionality to its comments as you would with another human. Let it be a mirror of your own thoughts, and take breaks from your conversation. Use different bots to keep yourself from going too far down a rabbit hole in any given conversation over time. Use your background knowledge as a human to assess whether or not you are receiving good advice, and better yet, check with an actual human you trust before you act on it. Bots can only parrot back patterns similar to those in the data used to build them. If you really are in pain, use the bot as a bridge to encourage you to reach out to a human who knows how to help you. This is truly a case of user beware.

Jane Peterson

Dr. Peterson has been teaching and facilitating systemic work with individuals, couples, and organizations internationally and in the USA for over two decades.

https://www.human-systems-institute.com