Why I Will Not Use Artificial Intelligence Tools for Therapeutic Work
A short statement outlining my “AI-free” stance, focusing on the ethical concerns of privacy and competency.
Currently—as with nearly all industries—private companies are rolling out a suite of “AI-powered tools” for mental health workers. I do not and will not use these tools, and this post (partially) explains my position. There are many reasons to resist the normalization of artificial intelligence in therapy, and I feel passionately about all of them. But, for now, I will unpack two of them, each tied to a key ethical obligation of clinicians: privacy and competency.
AI in therapy constitutes an unjustifiable privacy risk
As a Registered Clinical Counsellor, I follow a code of ethical conduct and standards of practice that communities of clinicians have created over decades of experience. That means, among much else, that I do everything possible to protect the privacy of what my clients express and experience during their sessions.
Currently, “AI scribes” are being touted as tremendous “time-savers” for therapists. These services “listen in on” sessions, transcribing and analyzing them, and produce summaries intended to fulfill a clinician’s obligation to take notes after sessions.
What’s the problem with saving time in this way? Whenever therapists entrust third-party service providers with sensitive client information, we take on risk: these companies’ security breaches become our security breaches.
So, why would I take this risk for services that are, at best, unnecessary, and at worst, harmful to other aspects of client care? I will explain that harm now.
AI tools erode the competency of therapists
Practicing with competence—doing your job to the best of your ability—is another core ethical principle of mental health work. I contend that the use of artificial intelligence represents a serious threat to the current and future competence of mental health workers.
Once again, many of these AI products “listen to” and summarize therapy sessions. This “solution” is meant to remove the burden of producing clinical notes after each session, a core task of therapy. Clinical notes capture elements such as topics discussed, interventions used, progress made, and ideas for what to try next. As such, in most practices, these notes represent the gradual unfolding of a “case conceptualization”.
Case conceptualization is just what it sounds like: it’s the product of a therapist conceptualizing or making sense of the work they are doing with a client. As such, it answers questions like:
Why is my client suffering?
What can we do about it?
How can we translate this into an actionable plan for treatment?
In other words, case conceptualization represents a large portion of the therapist’s work. This is behind-the-scenes work that a client is probably not aware of, and it is essential for good care. It’s also work that artificial intelligence is fundamentally incapable of doing. AI is not equipped to understand the nuances of attachment, emotional expression, or innumerable other aspects of a fundamentally human experience. A “scribe” cannot tell me what actually happened, or where a client and I should go, based on the words spoken during session. It is painfully obvious to me that these AI tools were created by people, well-meaning or not, who do not understand what therapy is or how it works.
Furthermore, emerging research points to the destructive impact of AI on cognitive abilities, and this has direct implications for therapist competence. Researchers at institutions like MIT are uncovering a troubling phenomenon associated with the use of large language models (the technology behind AI scribes): “cognitive debt”. In brief, taking the easy way out has a serious cost. When we do not invest effort in our mental work, we weaken our critical thinking, our memory, our creativity, and our basic engagement with the task at hand. Over time, this “debt” grows, leaving us less cognitively competent.
So, why would I use tools that make me less engaged with what is happening between me and my client? Less likely to remember the details of what we discussed or experienced? Less capable of thinking critically or creatively about our work together? Why would I agree to become less and less competent in all of these ways the longer I use these so-called tools? Forgive me the truly questionable pun, but choosing to work without AI “support” is, for me, the real no-brainer.