AI and Psychological Well-Being

Stade, E. C., Stirman, S. W., Ungar, L. H., Boland, C. L., Schwartz, H. A., Yaden, D. B., Sedoc, J., DeRubeis, R. J., Willer, R., & Eichstaedt, J. C.
npj Mental Health Research
Large language models (LLMs) such as OpenAI’s GPT-4 (which powers ChatGPT) and Google’s Gemini hold immense potential to support, augment, or even eventually automate psychotherapy. Enthusiasm about such applications is mounting in the field as well as industry. These developments promise to address insufficient mental healthcare system capacity and scale individual access to personalized treatments. However, clinical psychology is an uncommonly high-stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, a technical overview of clinical LLMs is presented. Second, the stages of integration of LLMs into psychotherapy are discussed while highlighting parallels to the development of autonomous vehicle technology. Third, potential applications of LLMs in clinical care, training, and research are discussed, highlighting areas of risk given the complex nature of psychotherapy. Fourth, recommendations for the responsible development and evaluation of clinical LLMs are provided, which include centering clinical science, involving robust interdisciplinary collaboration, and attending to issues like assessment, risk detection, transparency, and bias. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.

Schöne, J. P., Salecha, A., Lyubomirsky, S., Eichstaedt, J., & Willer, R.
Working Paper
Millions of people now use AI-powered chatbots to support their mental health, yet little is known about whether such interactions can effectively enhance psychological well-being. We conducted a preregistered experiment on a large, diverse sample (N = 2,922) to test four AI chatbots, each prompted to employ a multi-step strategy drawn from prior psychological research on sources of happiness and meaning in life. Chatbots encouraged participants to either (a) savor positive life experiences, (b) express gratitude toward a friend or family member, (c) reflect on sources of meaning in their life, or (d) reframe their life story as a “hero’s journey.” All four chatbots led to improvements on a broad range of psychological well-being outcomes, including affective well-being, meaning in life, life satisfaction, anxiety, and depressed mood, relative to a control chatbot condition. These results generalized to key subpopulations, including those with high baseline levels of anxiety or depression. Chatbot interactions increased interest in seeing a human therapist, including among those who were previously unwilling or had never attended therapy. A separate, nationally representative survey (N = 3,056) found that half of U.S. adults expressed interest in using empirically validated AI chatbots for mental health support. These findings demonstrate that AI-driven well-being chatbots grounded in psychological research offer a scalable and effective way to produce short-term increases in several aspects of psychological well-being. Importantly, these results do not generalize to all AI-based emotional support.
BlueSky Thread: AI Dialogues; X Thread: Chatbots
Hero’s Journey: Guides the user through a compact Hero’s Journey to reframe challenges, strengths, and next steps. Helps the user view their life as a story and identify actions aligned with their values.
Meaning in Life: Prompts the user to reflect briefly on what gives them purpose, such as the people, roles, and commitments that matter. Surfaces themes the user can carry into daily decisions.
Gratitude: Leads the user through a short exercise to express gratitude to a specific person. Encourages the user to notice others’ contributions and clarify what they appreciate.
Savoring: Guides the user to slow down and revisit a positive moment in detail, attending to sensations, thoughts, and meaning. Helps the user extend and consolidate the experience so it is easier for them to access later.
Important: These demos are educational research prototypes and do not provide medical or mental health advice. They are not a substitute for diagnosis, treatment, or therapy from a licensed professional and do not create a clinician–patient relationship. If the user is in crisis or considering self-harm, they should contact local emergency services; in the U.S., call or text 988.
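Below is a minimal sketch of how one such demo chatbot (the gratitude condition) might be implemented as a prompted LLM, in the spirit of the multi-step strategies described above. The model name, SDK choice, and prompt wording are illustrative assumptions, not the materials used in the study.

```python
# Minimal sketch of a prompted well-being chatbot (gratitude condition).
# Assumptions: the OpenAI Python SDK, the "gpt-4o" model name, and the prompt
# wording below are illustrative; they are not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A multi-step strategy encoded as a system prompt: the assistant walks the
# user through choosing a person, recalling specifics, and articulating thanks.
GRATITUDE_SYSTEM_PROMPT = """You are a supportive well-being chatbot.
Guide the user through a short gratitude exercise, one step per turn:
1. Ask them to choose a specific friend or family member.
2. Ask what that person did for them and how it affected their life.
3. Help them draft a brief message of thanks in their own words.
Be warm and concise. Do not give medical or mental health advice; if the
user mentions crisis or self-harm, encourage them to contact local
emergency services (in the U.S., call or text 988)."""

def chat_turn(history: list[dict], user_message: str) -> str:
    """Append the user's message, get the chatbot's reply, and update history."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the study's models may differ
        messages=[{"role": "system", "content": GRATITUDE_SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history: list[dict] = []
    print(chat_turn(history, "Hi, I'd like to try the gratitude exercise."))
```

Keeping the exercise structure in the system prompt, rather than hard-coding a script, would let the same conversation loop serve all four conditions by swapping in a different prompt.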