How Might AI Affect Psychotherapy?
AI is possibly one of the most versatile technologies of recent years, and one of its many potential uses is mental health care. Conversational AIs (like chatbots and digital assistants) can now provide "evidence-based psychological interventions" (Miner et al., 2019) similar to those you might expect from a human therapist.
AI can even give clinicians feedback about their psychotherapy. Feedback from an unbiased, neutral source (i.e., a computer) gives clinicians a clear route to improvement. Artificial intelligence can also make awkward or difficult conversations with young people about sex, suicide, and drug or alcohol use more comfortable for everyone involved.
While this field of collaboration is relatively new, it holds much promise. To assess the utility of AI in the mental health field, four dimensions were examined: (1) access to care, (2) quality, (3) the clinician-patient relationship, and (4) patient self-disclosure (Miner et al., 2019).
Important Definitions
Access to Care: Access to care describes how easily one can acquire needed care. Where is the nearest mental health clinic? Are there enough clinicians to take on new patients?
Quality: Quality describes how well the therapy intervention worked for both patient and clinician. Does the patient feel that they are supported, heard, seen, and understood? Does the clinician feel that they are doing a good job of supporting their client?
Clinician-Patient Relationship: This term describes the connection between the clinician and their patient. Do they trust each other? Does the clinician know what they need to know to make well-guided decisions about the patient? How can AI enhance this relationship?
Patient Self-Disclosure: This describes the things the patient tells a therapist (or an AI). This can be anything from their name and hometown to their deepest, darkest secrets. Self-disclosure depends heavily on the clinician-patient relationship; if patients do not trust their clinician, they will not disclose sensitive information about themselves, which hinders both the relationship and the quality of care. Solid rapport must be established before truly high-quality care can be given or received.
AI's Predicted Impact on Mental Health Care
(Chart made with data from Miner et al., 2019)
Care Delivery Approaches
Let's explore the ways AI could be implemented in the mental health care system, using the article by Miner et al. (2019). Though the allocation of tasks between traditional clinicians (humans) and AI has not yet been defined, we can outline four possible approaches here. Like the authors, we will ignore "the implications of embodiment or presence" (Miner et al., 2019). Embodiment or presence, in this context, means a virtual avatar or a robot that would actually be physically present with the client. We will instead focus on written and spoken language.
Above is the figure from the article that summarizes the four approaches and the authors' assumptions about each. Note that these approaches have not yet been clinically implemented; the figure reflects predictions only.
Humans Only Approach
Currently, mental healthcare is delivered by the "humans only" approach, meaning that only humans provide mental health services. Psychotherapy sessions are typically not recorded except for training or supervision purposes, and when they are, the only people who hear them are trained human clinicians. Due to privacy protections, it is critical that a recording not be released to any party without the patient's express consent. This is a well-established practice, so talking directly to an AI or having an AI listen to psychotherapy sessions is a "departure" (Miner et al., 2019) from it.
Human Delivered, AI Informed
Departing from the humans only approach, the next stop is the human delivered, AI informed approach. This means that while therapy and interventions are still provided by a human clinician, they are informed by an AI. Miner et al. (2019) describe one way this could be done: during a psychotherapy session, a listening device in the room relays the audio it picks up to a software program that "detects clinically relevant information" (Miner et al., 2019), such as symptoms expressed by the patient and interventions that could be taken. The software then relays this information back to the clinician and/or patient. This way, the burden of figuring out what information is clinically relevant is taken off the clinician and put onto a software program. The technology is in its very early stages, so it may be some time before it is seen in practice, but in theory it could allow the clinician to be more present with their client and help them in more ways than just clinically.
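To make the idea a bit more concrete, here is a minimal sketch of what the detection step might look like once the session audio has already been transcribed: a simple scanner checks each patient utterance against a list of cue phrases and surfaces any matches to the clinician. The function name, the cue list, and the SessionNote structure are illustrative assumptions rather than the system Miner et al. (2019) describe, and a real detector would be far more sophisticated than keyword matching.

```python
# Minimal sketch of a "human delivered, AI informed" detection step.
# SYMPTOM_CUES, SessionNote, and flag_clinically_relevant are hypothetical;
# they only illustrate the idea of surfacing clinically relevant information.

from dataclasses import dataclass, field

# Crude stand-ins for "clinically relevant information" a real model might flag.
SYMPTOM_CUES = {
    "can't sleep": "possible insomnia",
    "no energy": "possible low energy",
    "panic": "possible panic symptoms",
    "hopeless": "possible hopelessness; assess risk",
}

@dataclass
class SessionNote:
    """One transcribed patient utterance plus any flags raised for the clinician."""
    utterance: str
    flags: list = field(default_factory=list)

def flag_clinically_relevant(utterance: str) -> SessionNote:
    """Scan a single utterance for cue phrases and record what was found."""
    note = SessionNote(utterance=utterance)
    lowered = utterance.lower()
    for cue, label in SYMPTOM_CUES.items():
        if cue in lowered:
            note.flags.append(label)
    return note

# Transcript lines would come from a speech-to-text step, which is not shown here.
transcript = [
    "I just feel hopeless most days.",
    "And I can't sleep more than a few hours.",
]
for line in transcript:
    note = flag_clinically_relevant(line)
    if note.flags:
        print(f"Clinician dashboard: {note.flags} <- '{note.utterance}'")
```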
AI Delivered, Human Supervised
The third approach to AI-assisted psychotherapy is the AI delivered, human supervised approach. Instead of speaking to a human clinician as one normally would, the patient talks to a conversational AI, with the goal of getting a diagnosis, treatment, or both. The human clinician could screen patients first and hand off certain tasks to the AI, or the human clinician could supervise the session with the AI. This is a less-developed idea than the other three approaches, but it is very alluring. Of course, there are certain things that humans simply do better than computers, such as expressing empathy and other emotions; human clinicians have a warmth that computers are not necessarily capable of. However, combining the two could emphasize the best aspects of both and "outperform either people or computers alone" (Miner et al., 2019). One other thing to note about this approach is that in the table above, the "access to care" prediction column says "Improved, but limited scalability." In other words, although this approach would increase people's access to care, it is hard to scale up and mass-distribute, so it would reach a limited number of people.
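As a rough illustration of how supervision might be wired in, the sketch below drafts an AI reply for each patient message but holds anything that mentions risk for clinician review before it is sent. The functions and the risk-term list are assumptions made for this example; Miner et al. (2019) do not specify how the handoff would work, and a real system would need far more careful risk detection.

```python
# Hypothetical sketch of an "AI delivered, human supervised" handoff.
# draft_ai_reply, needs_human_review, and RISK_TERMS are illustrative only.

RISK_TERMS = ("suicide", "hurt myself", "hurt someone")

def draft_ai_reply(patient_message: str) -> str:
    """Stand-in for a conversational model's drafted response."""
    return "Thank you for sharing that. Can you tell me more about when this started?"

def needs_human_review(patient_message: str) -> bool:
    """Escalate any message that mentions risk to the supervising clinician."""
    lowered = patient_message.lower()
    return any(term in lowered for term in RISK_TERMS)

def handle_turn(patient_message: str) -> str:
    """Return the AI's reply, or hold it for review when risk is detected."""
    draft = draft_ai_reply(patient_message)
    if needs_human_review(patient_message):
        # The supervising clinician sees the message and the draft before anything is sent.
        return f"[HELD FOR CLINICIAN REVIEW] patient: {patient_message!r} | draft: {draft!r}"
    return draft

print(handle_turn("I've been feeling tense at work lately."))
print(handle_turn("Sometimes I think about suicide."))
```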
AI Only
The last approach Miner et al. (2019) discuss is the AI only approach. As the name suggests, this would mean that care is delivered entirely by AI and the only human involved is the patient. The patient speaks or types directly to a conversational AI that then provides interventions that are evidence-based and clinically tested. Note that the AI does not need to pass the Turing Test to provide impactful and quality care.
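To picture what an AI-only interaction could look like at its simplest, here is a sketch of a scripted check-in that walks a patient through a few questions and returns the answers. The questions, function names, and flow are hypothetical and not a clinically validated intervention; the conversational agents Miner et al. (2019) have in mind would be far more flexible than a fixed script.

```python
# Hypothetical sketch of an "AI only" scripted check-in.
# The questions and keys are illustrative, not a validated intervention.

CHECKIN_SCRIPT = [
    ("mood", "On a scale of 1-10, how has your mood been this week?"),
    ("sleep", "How many hours of sleep are you getting on a typical night?"),
    ("goal", "What is one small thing you'd like to work toward before we talk next?"),
]

def run_checkin(get_input=input, show=print):
    """Walk the patient through the scripted questions and return their answers."""
    answers = {}
    show("Hi, I'm a check-in assistant. Your answers are saved for your care team.")
    for key, question in CHECKIN_SCRIPT:
        answers[key] = get_input(question + " ")
    show("Thanks for checking in. A summary has been saved.")
    return answers

if __name__ == "__main__":
    # Canned answers so the sketch runs without a live user.
    scripted = iter(["6", "5 hours", "Take a short walk each morning"])
    result = run_checkin(get_input=lambda prompt: next(scripted))
    print(result)
```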
Care Dimension Impacts
We have already defined the four dimensions of impact considered in this article (access to care, quality, the clinician-patient relationship, and self-disclosure), so now let's discuss each in more detail.
Access to Care
There is a large shortage of mental health professionals in the United States, which makes it more difficult to access mental health care. The simple solution would be to encourage more people to study psychology or psychiatry, but the number of psychologists and psychiatrists actually declined from 2008 to 2013, so this is not currently feasible. To offset the shortage, we can use conversational AI, since it is not bound by human attention spans or time availability. The amount of time mental health professionals spend in meaningful conversation with patients has also been steadily decreasing, and AI can help make up for that. Plus, AI is non-consumable, meaning the computer isn't going to run out of energy (unless it's unplugged) for the tough conversations people have in therapy sessions. Using AI as a supplement to therapy could also greatly reduce compassion fatigue and worker burnout, which in turn allows more people to be seen (Miner et al., 2019)!
Quality
The quality of care is a great concern when it comes to implementing AI in a clinical setting, and it is a valid concern to have. Some clinicians already use text-based services (usually SMS or email) to provide interventions, most commonly crisis interventions. This shows that both patients and clinicians are willing to use technology to give or receive mental health care and to test new ways of doing so. Innovations in computer technology allow AIs to learn human speech patterns and mimic them, and while that may sound creepy or weird, it is an effective way to communicate with a person. Everyone has unique speech patterns, or ways they talk, and by mirroring people's speech patterns, AI can connect better with us. This technology is currently being used to analyze "successful crisis interventions in text-based mental health conversations" (Miner et al., 2019). This way, the AI we implement can mimic conversations and speech patterns that have been shown to work, thus increasing quality of care. Similarly, AIs have already interviewed some patients about post-traumatic stress disorder (PTSD) symptoms, and patients quite liked the experience. In one study, students believed they were speaking to an AI and reported feeling better after the conversation.
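As a toy illustration of what "mirroring speech patterns" can mean in practice, the sketch below matches only one surface feature, the length of the user's messages, and picks a terser or fuller phrasing of the same reflective statement. Real systems learn style from data such as transcripts of successful text-based crisis conversations; the function names and the eight-word threshold here are arbitrary choices for the example.

```python
# Toy illustration of mirroring a user's speech patterns.
# Only message length is matched; real systems model style far more richly.

def average_words_per_message(messages: list[str]) -> float:
    """Average message length, a crude proxy for conversational style."""
    counts = [len(m.split()) for m in messages] or [0]
    return sum(counts) / len(counts)

def styled_reply(messages: list[str]) -> str:
    """Choose a short or long phrasing of the same reflective statement."""
    short = "That sounds really hard."
    long = ("It sounds like this has been weighing on you for a while, "
            "and it makes sense that you're feeling worn down by it.")
    return short if average_words_per_message(messages) < 8 else long

print(styled_reply(["bad day", "tired"]))  # terse user -> terse reply
print(styled_reply(["I've been feeling exhausted every single day this week and "
                    "I can't seem to catch up on anything."]))  # verbose -> fuller reply
```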
Just like anything, though, using AI in clinical settings could have drawbacks. Some patients may have "inflated or unsubstantiated expectations" (Miner et al., 2019) about the AI, meaning they may expect more of it than it is capable of, and when they are disappointed, they may stop trusting therapeutic interventions altogether. This matters for several reasons. The first is that if someone does not believe they are getting something good out of therapy, they are less likely to continue going. The second is that people whose experience falls short of what they expected have worse clinical outcomes than those whose expectations are met. And if the patient does not trust the AI, they are less likely to trust the human clinician as well. None of this is guaranteed, but it is something to be cautious and aware of (Miner et al., 2019).
Clinician-Patient Relationship
The relationship between clinician and patient is essential to a good clinical outcome. Solid trust and rapport must be built between provider and receiver if any good is to come of the therapy. Therapeutic alliance is positively associated with the improvement of symptoms (more alliance, more improvement). It develops from the clinician's collaborative efforts with their patient and shows that they agree on treatment goals and how to achieve them. This alliance can be formed in many ways, like using kind and supportive language and showing empathy and warmth. Therapeutic alliance is not limited to human-to-human interactions; in fact, many users report feeling a sense of alliance when talking to a conversational AI! By letting the AI handle some of the more monotonous, time-consuming tasks, like taking patient history or going over symptoms, clinicians could allocate their time and skill to more important tasks, like conversation with their client. This is what could cut down on compassion fatigue and worker burnout: hearing client after client recount their trauma and sadness is incredibly tiring, and while the work is extremely important and helpful, it can really wear someone down. The use of AI could help prevent this. However, some critics argue that taking patient history and reviewing symptoms are exactly what build rapport and a good relationship between provider and receiver, and that using AI for such tasks puts distance between the clinician and the patient, which we do not want. This area needs further research before a conclusion can be reached (Miner et al., 2019).
Patient Self-Disclosure and Sharing
Self-disclosure is telling someone else about yourself. This could be anything from your favorite song or food to the secrets you don't tell anyone. As you can probably imagine, self-disclosure is paramount in talk therapy, since the clinician can't help you if they don't know what's wrong. Usually in therapy, the topics you discuss with a clinician can be dark or touchy, like sexual history (including abuse), drug and alcohol use, thoughts of suicide or self-harm, and traumatic experiences. This, of course, is not all you could discuss with a therapist (they enjoy hearing good things too), but these are typically what make someone go to therapy in the first place. Anything you tell a licensed therapist is confidential, meaning that unless they believe you are a danger to yourself or others, what you say in their office stays in their office (and in your file). Clinicians are bound by the Health Insurance Portability and Accountability Act, otherwise known as HIPAA. If a clinician, or anyone else bound by HIPAA, shares something that is not supposed to be shared, they could lose their license to practice, be fined, or even be jailed. Most patients understand that their therapist does not have a perfect memory and will probably forget much of what they say, which is why many clinicians take notes during sessions. Artificial intelligence, however, has almost limitless memory and can recall things for as long as needed. This could cause some patients to worry about the confidential nature of therapy.
According to Ho et al. (2018), you can disclose equally intimate and personal information to an AI or a human and expect similar psychological benefits, such as feeling understood, seeing a negative event or situation more positively, and even feeling better in the long term. In fact, these findings are so strong that it doesn't even matter whether your conversation partner is actually what you think they are; all that matters is what you believe them to be. If you believe, wholeheartedly and completely, that your conversation partner is an AI, you will still experience the benefits of self-disclosure, even if your conversation partner turns out to be a person. This means that self-disclosure itself (and its related processes) is what is truly beneficial, not who or what you disclose to.
A potential issue in implementing AI into clinical mental health care is liability. In 1969, a man receiving therapy at the University of California told his counselor that he was going to murder Tatiana Tarasoff, a fellow student. The therapist told the campus police that this student was dangerous and should be kept under watch. The police did detain him for a short while, but because he seemed "rational," they released him and said he was not a danger to anyone. The patient went on to murder Tarasoff, and in 1974 her family sued the counselor he had been seeing for not warning Tatiana or her family. This case established the "duty to protect": if a patient says they are going to harm someone, the therapist must warn the intended victim, or tell their supervisor, who will then warn the intended victim. Should the therapist intentionally fail to warn the intended victim, they are "liable to civil litigation" (Miner et al., 2019), meaning the victim's family can sue the therapist in civil court for not warning them. AI's potential liability in these kinds of scenarios has not been tested, so before we can implement AI into psychotherapy, this must be explored and expectations clarified.
So, What's Going to Happen?
It's hard to say exactly what would (or perhaps will) happen should AI be implemented into the mental health field. Because this hasn't been clinically tested yet, all we have for now are predictions. So far, though, things are looking pretty good!
Some advantages of adding AI to your typical therapist visit include, but are not limited to, the following:
Lessening worker burnout and compassion fatigue.
Allowing clinicians to focus on clinically relevant details and help their patients more efficiently.
Making difficult conversations more comfortable for everyone involved.
Widening access to mental healthcare so it reaches more people.
Not being limited to human time and attention span.
There are, of course, some disadvantages as well.
Computers simply are not as "warm" and empathetic as people.
Potential harm to the clinician-patient relationship.
People may not trust AI as much as they would a human clinician.
It may take time to accustom clinicians and patients alike to using this new technology, which may impact workflow.
In the case of a patient being a threat to themselves or someone else, the AI's liability is unknown.
Summary
Given how far AI has come in recent years, it is not too surprising that it can provide psychotherapy. Although this is certainly a novel use of the technology, the possibilities are mostly positive. AI could impact psychotherapy in several different ways, with some positives and some negatives. Positives include decreasing worker burnout and compassion fatigue, decreasing individual workloads so clinicians can see more patients, and providing faster treatment to patients. Of course, no new technology is without its downsides, and implementing AI into psychotherapy could have its fair share. These include the unknown liability of AI software in cases like Tarasoff, complicated confidentiality considerations, and patients not disclosing as much to an AI as they would to a human clinician... or would they disclose more? To learn more, see the section on patient self-disclosure above.
Check This Out!
This is a TED talk from Dr. Andy Blackwell, who designed "8 Billion Minds," a program that deploys AI and deep learning to deliver mental healthcare online!