Please Don’t Use an AI Chatbot for Therapy
We live in scary times. Every day, the news brings another doom-laden prediction about the existential threat posed by artificial intelligence (AI). Other experts are more sanguine, predicting a far rosier future, or at least a not-so-scary one. I am not an expert on AI, so it's hard to know who to believe.
One thing I do know is that increasing numbers of people are turning to chatbots for advice, support, even therapy. In one recent study, a team from Bournemouth University asked 31,000 adults across 31 countries about large language models such as ChatGPT. A worrying 61 per cent of those surveyed said they would be happy to use AI for counselling, while 45 per cent said they would trust an AI doctor.
‘If someone is experiencing depression, they do not want to wait months for an appointment, so instead they can turn to AI,’ said Dr Ala Yankouskaya, a senior lecturer in psychology at the university and lead author of the study. But she added, ‘the language used [is] very vague and confusing because the developers are careful not to jump into providing diagnoses... so, it is no substitute for speaking to a health professional.’
Chatbots are not to be trusted
I must confess at this point that I am not a fan of AI generally. Even if it doesn’t wipe out all of humanity, as the scarier predictions suggest, the unregulated, reckless way AI is being rolled out by Big Tech is already having a devastating impact on the environment. The vast, energy- and water-hungry data centres being built across the US are hugely controversial, meeting strong opposition from local people. And the combination of AI and robotics poses a major threat to workers across the globe, with seemingly no planning or foresight about what millions of people will do when their jobs vanish overnight.
AI is an incredible technology and could be used to enrich the lives of humans across the globe, but like all technologies it is only as useful, ethical and safe as the values of the people building and disseminating it. That’s why, as client after client tells me, ‘I’ve started using Chat for support between sessions,’ or, ‘I always turn to Chat when I’m upset,’ I feel increasingly worried. Why? Well, let’s think about the way Big Tech has handled our most intimate, personal data in the past. Scandal after scandal has shown these big, profit-driven companies to be highly untrustworthy, at the very least.
They want three things from you: your attention, your data and your money. And Big Tech companies have been consistently unscrupulous about how they harvest these precious resources. Like all human-facing digital technology, chatbots are built to keep your attention by any means necessary. They have been designed to be highly flattering and sycophantic, always telling you what you want to hear. We will come to why that’s a bad idea in a ‘therapist’ shortly, but one glaring problem is that this flattery encourages you to keep opening up, telling them your most private and personal information. For a human therapist like me, that’s not a problem: protecting our clients’ data and confidentiality is drilled into us throughout our training, as well as being a legal and ethical requirement of our profession.
For Big Tech? Not so much.
A real therapist will challenge, not just flatter you
Back to the idea of human vs. digital therapist, and why sycophancy is not what you need for your mental health. All forms of therapy include, at some level, the idea that we should not just nod along while our clients tell us about the self-destructive plans they have for the weekend. ‘I know I’m in 12-Step but I’m going to have one last blowout with my friends,’ they say as we nod along, smiling. ‘He keeps cheating but he’s just so handsome! I can’t stay away,’ they say as we tell them what a good, praiseworthy idea that is.
No!
A real, human therapist would listen, then kindly and respectfully explain why those are terrible ideas, and help the client choose a better, more self-compassionate path. An AI therapist would not, because sycophancy is baked into its design: too much challenge, and the user will simply switch it off and find another, more obsequious digital therapist to nod along with them.
I totally understand that, for many people, high-quality therapy is out of reach, for a number of reasons, cost chief among them. And I strongly support widespread investment in mental health provision, so that everyone can get the support they need. Many therapists, myself included, offer low-cost or pro-bono places for those who can’t afford the full fee. Other options include public-health psychological services, charities and support groups, many of which offer help for free or at low cost.
But please, please, do not turn to a chatbot for help with your most precious, private, vulnerable needs. You cannot trust what will happen to your data. And the growing number of legal cases against Big Tech, over the harm these digital therapists have caused to incredibly vulnerable people, is surely just the tip of the iceberg – I shudder to think how many more people are being dangerously misled, or downright hurt, by techno-shrinks.
Your mental health is far too valuable to put into unscrupulous hands, however convenient they may be.
Love,
Dan ❤️
Enjoying Dan’s blog? Please make a small donation to support his work – all donations received will go to help Dan offer low-cost therapy or free resources to those who need them. Thank you 🙏🏼