In less than a decade, Artificial Intelligence has gone from a sci-fi trope to a routine part of our lives. Innovators in every industry are looking for ways to use AI to enhance our lives, and the mental health field is no exception. Recent years have seen a full spectrum of AI-based mental health apps, whether that means supporting providers, guiding users toward a mindful lifestyle, or even replacing therapy.
Given the state of mental health in America, AI therapy seems like an elegant solution to the problems at hand. According to some studies, 26% of Americans have been diagnosed with a mental health issue, and 50% report extreme loneliness. Unfortunately, we simply don’t have enough trained therapists to meet the need. In the US, there is just one therapist for every 340 people, and around 61% of providers report being burned out.
Many AI tools have been designed to support these providers, aiding in administrative tasks or suggesting diagnoses and treatment plans based on therapist-led sessions. Many more aim to tackle the problem directly, working with the general public to augment or even replace therapy.
AI uses pattern recognition to simulate human behavior and, assuming it is trained on unbiased data, operates without the assumptions and prejudices that humans are prone to. This advantage paints a promising future for AI in mental health. However, there is still a great deal of skepticism among industry experts.
The Darker Side of AI
The main concern around AI therapy is the threat to vulnerable individuals. Because AI is still relatively new, there is precious little regulation around it. That means a digital therapist that takes advantage of a patient’s vulnerability won’t face the same consequences a human would.
And unfortunately, these fears are not unfounded. There have been reports of users getting “dropped” when they disclose serious issues such as suicidal ideation, with the bot directing them to 911 instead of connecting them to crisis care. More severe incidents involve bots going rogue and suggesting self-harm or aggression toward others. Vulnerable individuals can also become addicted to AI therapy bots, and in severe cases that dependence has led them to act on these suggestions.
Promising Research
Do these results imply that AI should be strictly limited to assisting mental health providers, rather than interacting directly with the general public? As with most hot-button issues today, the answer isn’t cut-and-dried. Several recent studies suggest that AI therapy tools can positively impact users’ mental health.
One study involving AI trained in clinical best practices was praised for its rigorous methods by the American Psychological Association, which has been advocating caution in the face of unregulated therapy bots. This study found a significant improvement in mental health symptoms when participants worked with their AI therapy bot. One of the primary advantages was the availability of AI; users could talk to their bots anywhere, anytime. As one researcher explained, “We had folks that were messaging it about their insomnia symptoms in the middle of the night and getting their needs met in these moments.”
Similar studies have also produced promising results, likely thanks to their strict adherence to current mental health best practices in training their AIs.
AI and Your Benefits Package
If AI is, as many believe, the future of mental health, how does it fit in with mental health parity, EAPs, and other employee benefit offerings? Moreton & Company has set out to investigate this question by pioneering an employer-based AI pilot program. The pilot features an exclusive arrangement with Lore Health, an app that aims to combat burnout by building resilience in employees. The app functions essentially as an AI-powered EAP, with a conversation-based “preemptive care” approach designed to nip issues in the bud before they become insurmountable.
As a benefit offering, the app has produced mixed results. While many employees embraced it and provided positive feedback, many more declined to engage with it at all. In the end, the return on investment for such an offering is difficult to determine. Part of this issue can be alleviated through cost structure (for example, charging a per-employee-per-month fee versus a percentage of ‘savings’). Still, most of the difficulty stems from the nature of wellness programs in general, and mental health offerings in particular.
The tangible benefits of mental health programs are difficult to quantify; tying them to a reduction in claims is far harder than it is for something like a smoking cessation program or weight loss initiative. Ultimately, mental health AI programs may be a tough sell as an employee benefit, as employers struggle to answer the key question: Is the risk worth the reward?