Clear Sky Science
Language-based assessments can predict psychological and subjective well-being
Why Words Can Reveal How We’re Really Doing
Most of us have filled out check-box surveys about happiness or mental health. But our moods and sense of purpose are usually expressed in stories: what we say about our lives, our goals, our relationships. This article explores whether modern artificial intelligence can listen to those stories—written or spoken—and estimate how satisfied and fulfilled we feel, potentially offering a new way to monitor well-being in everyday life.
Two Kinds of “Doing Well”
Psychologists often distinguish between two broad types of well-being. One is subjective or “hedonic” well-being: feeling good, having more positive than negative emotions, and being generally satisfied with life. The other is psychological or “eudaimonic” well-being: feeling that life is meaningful, that we are growing, self-directed, and living according to our values. While AI tools have already shown that they can estimate life satisfaction from short text responses, it has been unclear whether they can also detect deeper qualities like autonomy—the sense that we are making our own choices—and other facets of psychological health.
Listening to People’s Reflections
Across three studies, adults and college students were asked to answer open-ended questions about their lives. Some prompts focused on life satisfaction (for example, “Overall, are you satisfied with your life or not?”) while others probed aspects of psychological well-being, such as autonomy (“In what ways are your decisions influenced—or not—by what others are doing?”), personal growth, relationships, and purpose. Participants responded either by writing paragraphs or speaking for at least a minute; their audio was transcribed into text. Everyone also completed standard rating-scale questionnaires for life satisfaction and psychological well-being, which served as comparison benchmarks.

How AI Turned Stories into Scores
The researchers fed the text of these reflections into advanced language models based on transformer technology, which represent each response as a high-dimensional numerical pattern. Using statistical methods, they trained models to predict people’s questionnaire scores from these patterns and checked how closely the predictions matched reality. In the first two studies, the models did a decent job: language-based predictions for autonomy and life satisfaction were moderately related to people’s actual scores, and they also showed some ability to generalize to related traits such as feeling capable, connected to others, or purposeful. However, these correlations were clearly lower than those reported in earlier work that used much shorter, keyword-style responses instead of narratives.
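The pipeline described above can be sketched in a few lines: turn each response into a numeric vector, fit a regularized regression against questionnaire scores, and report the correlation between cross-validated predictions and actual scores. This is a minimal illustration, not the authors' code; random vectors stand in for real transformer embeddings, and the scores are simulated so the model has a noisy signal to recover.

```python
# Minimal sketch of a language-to-score prediction pipeline.
# Hypothetical data: random vectors stand in for transformer
# embeddings; questionnaire scores are simulated, not real.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_people, n_dims = 200, 768               # e.g. one 768-d embedding per response
embeddings = rng.normal(size=(n_people, n_dims))

# Simulated well-being scores that partly depend on the embedding,
# plus noise, so there is a genuine but imperfect signal.
weights = rng.normal(size=n_dims)
scores = embeddings @ weights / np.sqrt(n_dims) + rng.normal(scale=0.5, size=n_people)

# Ridge regression with cross-validated predictions; the
# prediction-vs-actual correlation is the accuracy metric.
model = Ridge(alpha=10.0)
predicted = cross_val_predict(model, embeddings, scores, cv=5)
r = np.corrcoef(predicted, scores)[0, 1]
print(f"cross-validated r = {r:.2f}")
```

Cross-validation matters here: predicting the same people the model was trained on would inflate the apparent accuracy.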
Life Satisfaction Is Easier to Hear Than Autonomy
The third and largest study sharpened the picture. Here, written responses about life satisfaction allowed the model to predict questionnaire scores quite well, while predictions for autonomy were noticeably weaker. When the team compared their system to cutting-edge AI models (GPT-3.5 and GPT-4), the newer systems were even better at reading life satisfaction from language but only modestly better at reading autonomy. To understand why, the authors examined which words tended to appear in high- and low-scoring responses. High life satisfaction went hand in hand with positive emotion and social words—terms like “love,” “grateful,” “spouse,” and “friends.” Low satisfaction responses, by contrast, leaned on uncertain, problem-focused wording such as “think,” “seem,” and “maybe.”
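The word-level comparison the authors ran can be illustrated with a toy frequency count: tally how often a word appears per 100 words in high-scoring versus low-scoring responses. The example texts and word choices below are invented for illustration, not the study's data.

```python
# Toy illustration of comparing word rates between high- and
# low-scoring responses (invented example texts, not study data).
from collections import Counter

high_satisfaction = [
    "I love my spouse and I am grateful for my friends",
    "grateful for family and friends every day",
]
low_satisfaction = [
    "I think things seem okay but maybe they are not",
    "it could maybe improve, I think, it would seem",
]

def rate_per_100_words(texts, target):
    """Occurrences of `target` per 100 words across all texts."""
    counts = Counter(w.strip(".,").lower() for t in texts for w in t.split())
    total = sum(counts.values())
    return 100 * counts[target] / total

for word in ["grateful", "maybe"]:
    print(word,
          round(rate_per_100_words(high_satisfaction, word), 1),
          round(rate_per_100_words(low_satisfaction, word), 1))
```

Even this crude tally shows the pattern the article describes: gratitude and social words cluster in satisfied responses, hedging words like "maybe" in dissatisfied ones.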

Why Inner Freedom Is Harder to Read
Language linked to autonomy looked different. People who scored lower on autonomy used many cognitive and evaluative words, suggesting worry, second-guessing, and trying to meet outside expectations. Those with higher autonomy also used reflective language, but mixed it with action and agency—words related to choosing, doing, and moving toward goals. Rather than a handful of common keywords, autonomy seemed to be expressed in highly individual ways that depended on each person’s life context. This made it harder for AI models, even very powerful ones, to pick up a simple linguistic signature of this deeper psychological quality.
What This Means for Real-World Use
Overall, the article concludes that language-based tools are already quite good at estimating whether people feel satisfied with their lives, especially when using state-of-the-art AI. But they struggle more with subtler, more personal dimensions of well-being like autonomy and other aspects of meaning and growth. For now, these tools might be useful as low-burden, context-rich complements to traditional surveys—helping researchers track broad trends in happiness from everyday writing or speech. Yet they are not ready to replace careful, multi-method assessments in mental health or clinical settings, particularly when decisions depend on understanding the more complex, inner layers of how people experience their lives.
Citation: Mesquiti, S., Cosme, D., Nook, E.C. et al. Language-based assessments can predict psychological and subjective well-being. Commun Psychol 4, 33 (2026). https://doi.org/10.1038/s44271-026-00400-3
Keywords: well-being, life satisfaction, autonomy, language analysis, artificial intelligence