Does the Promise of AI Extend to Mental Health? 

For those tasked with purchasing mental health care benefits for their members, employees, students, and patients, understanding the potential role of artificial intelligence (AI) in mental health is crucial. From the early days of ELIZA, one of the first conversational AI programs, designed in 1966 to emulate a Rogerian psychotherapist, to today’s advanced language models and generative AI systems, the field has made remarkable strides. AI is no longer a far-fetched concept but a rapidly evolving technology that promises to transform care by expanding mental health services, improving access, and personalizing treatment. In the future, AI’s ability to learn from data, automate tasks, and generate human-like content may enable innovative solutions to the growing mental health challenges faced by communities worldwide.

Early Experiments with AI in Mental Health Care 

Some mental health startups see potential in applying AI to address a critical unmet need. Close to a billion people globally suffer from conditions like anxiety, depression, and addiction, yet there is a severe shortage of therapists to help them. Although a number of innovative solutions are already being deployed to bridge the provider gap, the challenge remains daunting. Current mental health AI solutions aim to provide more accessible, stigma-free, and personalized care:

  • Programs are being developed at the University of Washington by a team of researchers led by Dr. Tim Althoff to provide suggestions for making peer-to-peer interactions more empathetic. 
  • Chatbots such as Woebot act as always-available virtual therapists that aim to utilize cognitive behavioral therapy methods and use natural language processing to provide information on mental health challenges. 
  • Ellie, a technology that is currently being tested in a research setting, is a virtual therapist that analyzes speech and facial cues to try to detect signs of problems like depression and PTSD.  
  • Apps such as Ada Health use AI-powered symptom checking and aim to provide tailored mental health content and recommendations matched to the user’s needs.  

The goal of these technologies is to increase access to mental health care by lowering barriers like cost, location, time constraints, and shame. While these innovations offer hope for a world in great need of mental health resources, it remains uncertain whether AI can truly resolve the shortage of mental health providers. Can AI-powered tools really deliver effective, personalized mental health care without human intervention or oversight?

One Foot in Front of the Other: Slow and Steady Advancement of AI-powered Mental Health 

The use of AI in the field of mental health is still in its infancy. Chatbots and conversational agents have begun assisting with basic therapy guidance, but some experts are concerned that they lack the empathy, warmth, and clinical judgment needed to provide actual counseling or therapy. Rather than diagnosing or treating, today’s mental health chatbots can point people to the right resources at the right times by detecting cues. For example, upon detecting evidence of depression, an AI assistant might offer links to programs, lessons, or coaching. However, a meta-analysis by researchers at Simon Fraser University, funded by the Social Sciences and Humanities Research Council (SSHRC), notes that accurately tracking how much a person improves over time and tailoring recommendations accordingly remains extremely difficult in mental health care, where progress is complex and nonlinear.
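The cue-and-routing pattern described above can be illustrated with a minimal sketch. Production chatbots use trained natural language processing models rather than keyword matching, and the cue lists and resource names below are hypothetical placeholders, not any vendor’s actual configuration:

```python
# Illustrative sketch: route a user's message to support resources based on
# simple keyword cues. The cue vocabularies and resource names below are
# hypothetical; real systems rely on trained NLP classifiers and clinical review.

CUE_KEYWORDS = {
    "depression": {"hopeless", "worthless", "no energy"},
    "anxiety": {"panic", "on edge", "racing thoughts"},
}

CUE_RESOURCES = {
    "depression": ["Mood self-help program", "CBT lesson: reframing thoughts"],
    "anxiety": ["Guided breathing exercise", "Coaching session signup"],
}

def suggest_resources(message: str) -> list[str]:
    """Return resource suggestions for each cue detected in the message."""
    text = message.lower()
    suggestions = []
    for cue, keywords in CUE_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            suggestions.extend(CUE_RESOURCES[cue])
    return suggestions
```

Even this toy version shows why the research cited above urges caution: a keyword match says nothing about severity or change over time, which is exactly the longitudinal tracking problem that remains unsolved.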

More advanced AI applications in mental health, such as automatically generating progress notes by listening in on provider-patient sessions, can significantly decrease administrative burden. Providers, however, still need to evaluate and edit auto-generated notes as appropriate to ensure that the use of AI does not compromise the quality of care, lead to an overreliance on technology, or perpetuate biases present in training data. Additionally, the privacy vulnerabilities posed by commercial AI’s mass data collection present challenges for sensitive mental health applications. As seen in industries like social media and search, the priorities of companies developing AI around open access and data mining do not always align with protecting patient privacy. This issue is being actively addressed by major U.S. companies and has garnered the attention of the White House, which issued a National Strategy to Advance Privacy-Preserving Data Sharing and Analytics.

Experts argue that the use of unvetted AI/machine learning (ML) tools in mental health currently amounts to a “Wild West” in active delivery of health care, rather than fully validated medical algorithms and protocols. More research, oversight, and health provider input are desperately needed to harness AI’s potential while applying ethical guardrails regarding efficacy, privacy, and unintended consequences. For now, organizations can likely gain more leverage from analyzing their own data to understand patients’ needs better, whether by identifying who needs outreach at high-risk moments or which individuals would benefit most from certain intervention types.  
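Analyzing an organization’s own data for targeted outreach can start with transparent rules rather than opaque ML. The sketch below flags members for proactive outreach; the field names and thresholds are hypothetical, and any real program would be designed with clinicians and validated against outcomes:

```python
# Illustrative sketch: flag members for proactive outreach from an
# organization's own records. Field names and thresholds are hypothetical
# examples, not validated clinical criteria.

def outreach_priority(member: dict) -> bool:
    """Flag members with recent missed visits or a worsening screening score."""
    missed = member.get("missed_appointments_90d", 0)
    score_change = member.get("phq9_latest", 0) - member.get("phq9_baseline", 0)
    return missed >= 2 or score_change >= 5  # hypothetical cutoffs

members = [
    {"id": "A", "missed_appointments_90d": 3, "phq9_baseline": 8, "phq9_latest": 9},
    {"id": "B", "missed_appointments_90d": 0, "phq9_baseline": 6, "phq9_latest": 12},
    {"id": "C", "missed_appointments_90d": 0, "phq9_baseline": 5, "phq9_latest": 4},
]
flagged = [m["id"] for m in members if outreach_priority(m)]
```

Unlike an unvetted black-box model, a rule like this can be inspected, debated, and adjusted by the clinical team that owns it, which is the kind of oversight the experts quoted above are calling for.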

AI is far from being able to fully replace human providers, but it may enhance care for specific use cases in the future under guidance from mental health experts, such as supporting triage and screening assessments, scheduling, documentation of care, and data analysis to support targeted outreach.