Can You Really Trust an AI Therapist?
Have you ever used AI for advice, or even therapy? If so, you are certainly not alone. I have noticed an increasing number of my clients turning to ChatGPT for help and support between sessions. And a survey last year found that a majority of young Americans were comfortable confiding in an AI chatbot, with 55 per cent of 18-29-year-olds happy to discuss their personal lives with a non-human therapist. As AI seeps into every area of our lives, it’s not surprising that people are now turning to this new technology for therapy too. But is this safe? And can an artificial intelligence ever replace a human one?
I obviously have some skin in this game, as it does make me nervous to watch AI threaten the jobs of large sections of the workforce, including mine. It seems crazy to me that these technologies are being rolled out with minimal oversight, making a small number of people very rich while fundamentally changing the way billions of people live and work. But whatever my concerns about these changes, this post asks a more basic question: should you really use ChatGPT for help with your mental health?
Concerns about confidentiality
One of my biggest worries about people using AI in this way is the confidentiality, or lack of it. People give these AI models access to the most intimate personal data imaginable. Let’s say you were having an affair with your best friend’s wife. If you provided ChatGPT with all the details and asked for advice in return, how could you be sure that information was being securely stored? Or say you had a problem with addiction, or ongoing struggles with your mental health, which you were keeping secret from your employer. Given the way Facebook has been found, over and over again, to have misused personal data, I do not trust the Big Tech companies rolling out these AI models to store data safely. ‘Move fast and break things’ is, famously, their motto. And they are driven by voracious profit-seeking, not care and concern for those who use their services, who may be vulnerable or even in serious crisis.
I am very careful about the online services I use and go to great lengths to keep my clients’ data safe. I use only secure, non-data-harvesting services: Firefox for my browser, Ecosia for internet searches, Dropbox to send information and ProtonMail, which is end-to-end encrypted, for my emails. Ensuring confidentiality is foundational for all therapy, because you are telling your therapist things you may never have told another soul. I establish confidentiality from the first session with my clients, as it’s so important to help them feel safe. Should you really give that information, freely, to ChatGPT? In my opinion, we need to think more critically about these services, rather than trusting them to keep our innermost secrets.
Is AI really intelligent?
Another thing that concerns me is that, although AI models like ChatGPT are very convincing at sounding empathic, insightful and intelligent, they are essentially fancy versions of the autocorrect function that infuriatingly mangles words in your text-messaging app. They are trained to predict plausible responses to your prompts and, like so much of today’s technology, are designed to retain your attention for as long as possible. So they tend to say whatever they think you want to hear (in fact, a recent ChatGPT update was withdrawn because it had become ‘dangerously sycophantic’). AI chatbots have also been found to make up research in order to sound more convincing.
This may all sound a bit old-fashioned – and defensive, coming from a flesh-and-blood therapist! To be clear: I’m not against technology in general or AI in particular. It clearly has huge potential in areas like medical research, and like all technology, AI is essentially neutral – it’s neither good nor bad. But it’s the companies that develop and disseminate it that we should keep a watchful eye on. And Big Tech companies like Meta have done tremendous harm: causing worryingly high levels of anxiety and depression in the young, spreading misinformation, amplifying dangerous conspiracy theories, helping far-right leaders and autocrats manipulate voters and boosting misogynistic influencers like Andrew Tate. Should we really trust these companies with our most tender, personal and vulnerable secrets? I think we all need to tread with great care.
I hope that’s helpful and, at the very least, thought-provoking. We are entering a brave new world of technology, and we should use these new tools to enhance our lives, while proceeding with caution wherever they could do harm.
Love,
Dan ❤️
Enjoying Dan’s blog? Please make a small donation to support his work – all donations received will go to help Dan offer low-cost therapy or free resources to those who need them. Thank you 🙏🏼