Clear Sky Science
Numbers and measurement: a critique of evidence-based practice in psychology
Why This Matters for Everyday Therapy
When you go to a therapist, you probably hope for help that fits you as a person, not just a score on a test. This article asks whether today’s push for “evidence-based” psychology really delivers that kind of help. It looks closely at how numbers, rating scales, and a particular research method—randomized controlled trials—have come to dominate psychological practice, and it questions whether tools borrowed from physics and medicine are always the best guides for understanding human minds and suffering.
From Weighing Planets to Weighing Feelings
The story begins with the scientific revolution, when thinkers like Galileo and Newton turned physics into a model of exact, mathematical science. Their success created a powerful ideal: real knowledge was knowledge expressed in numbers and laws. Over time, this ideal spread from the “high sciences” of mechanics and astronomy to “lower” fields like biology, medicine, and eventually psychology. Early psychological pioneers worked hard to make inner life measurable, treating sensations and mental states as if they could be put on scales much like temperature or weight.

How Numbers Took Over Psychology
As statistics developed, researchers found ways to use averages, probability, and error curves to describe messy human realities. Social scientists began to treat traits like height, intelligence, and even moods as quantities that could be measured and compared across groups. In psychology, this led to formal theories of measurement and to widely used tools like the Beck Depression Inventory, which turns ratings of 21 experiences, such as sadness, guilt, sleep problems, and loss of appetite, into a single depression score. The authors argue that, in practice, such scales often function more as persuasive technical props than as precise instruments: they compress shifting, personal experiences and changing diagnostic definitions into neat numbers that look more exact than they truly are.
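To see what this compression looks like in practice, here is a minimal sketch of how a BDI-style total is computed. The real inventory asks 21 questions, each rated 0 to 3 (total 0 to 63); the two response patterns below are invented for illustration:

```python
def bdi_total(item_scores):
    """Sum 21 item ratings (each 0-3) into a single 0-63 depression score."""
    assert len(item_scores) == 21, "the BDI has 21 items"
    assert all(0 <= s <= 3 for s in item_scores), "each item is rated 0-3"
    return sum(item_scores)

# Two very different inner lives can collapse into the same number:
person_a = [3] * 6 + [0] * 15   # severe trouble on a few items, none elsewhere
person_b = [1] * 18 + [0] * 3   # mild trouble on almost every item

print(bdi_total(person_a))  # 18
print(bdi_total(person_b))  # 18
```

Both hypothetical patients score 18, even though one is acutely struggling in a few areas while the other is mildly burdened across nearly all of them, which is exactly the loss of information the authors have in mind.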
Why Randomized Trials Are Not the Whole Story
Evidence-based practice in psychology places randomized controlled trials at the top of a hierarchy of evidence. These trials were first refined in agriculture and medicine, where they can work well for testing fertilizers or drugs: random assignment, control groups, and tests of statistical significance help separate real effects from chance. But when the same template is applied to psychotherapy, things get complicated. People know whether they are in therapy; the relationship with the therapist matters; and life problems rarely fit clean diagnostic boxes. The authors argue that trials can create a misleading sense of certainty, focusing heavily on p-values while ignoring deeper biases, such as the selective publication of "positive" findings, and stripping away much of what makes psychological problems and treatments rich and varied.
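The publication-bias worry can be made concrete with a toy simulation (a sketch of the general statistical point, not the authors' own analysis): if many small trials of a treatment with no real effect are run, and only those crossing the conventional significance threshold are published, the published literature ends up reporting a sizeable average "effect".

```python
import random

random.seed(0)

def run_trial(n=20, true_effect=0.0):
    """Simulate one small two-arm trial; return the estimated effect and its z-score."""
    treatment = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = sum(treatment) / n - sum(control) / n
    se = (1.0 / n + 1.0 / n) ** 0.5   # standard error, assuming known unit variance
    return diff, diff / se

estimates = [run_trial() for _ in range(5000)]
published = [d for d, z in estimates if z > 1.96]  # only "significant positive" trials

print(f"true effect:                 0.0")
print(f"mean effect, all trials:     {sum(d for d, _ in estimates) / len(estimates):.3f}")
print(f"mean effect, published only: {sum(published) / len(published):.3f}")
```

Across all simulated trials the average effect hovers near the true value of zero, but the subset that clears the significance bar averages well above it. Even with honest p-values in every individual report, selective publication alone manufactures an apparent treatment effect.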
What Gets Lost When We Reduce People to Scores
The article illustrates these concerns with a clinical trial of a specific therapy for depression that reports impressive improvements in average depression scores. Yet only a small, carefully selected subset of patients qualified for the study, and the report devotes many pages to fine-grained statistics on a mere 39 people. For the authors, this reveals a larger pattern: trials tend to narrow the range of people studied, reduce complex experiences to a handful of numbers, and then present those numbers as if they directly captured the reality of depression and recovery. Historical debates about intelligence testing show similar problems—turning “intelligence” into a single inborn quantity encouraged reifying a culturally loaded idea as if it were as concrete as a person’s height.

Toward a Richer Picture of Psychological Knowledge
In the closing sections, the authors argue that psychology should resist the dream of becoming a single, tightly unified “normal science” ruled by one favored method. Drawing on philosophers of science, they suggest that progress often depends on multiple, competing approaches rather than one dominant paradigm. Instead of letting randomized trials overshadow everything else, they propose a more courtroom-like way of thinking about evidence: different kinds of studies—quantitative experiments, qualitative interviews, case reports, and more—each provide clues that must be weighed together. In everyday terms, the article concludes that good psychological care should not be dictated by numbers alone. Rather, it should combine research findings with clinical judgment and the lived realities, cultures, and preferences of patients, accepting that no single metric can capture the full depth of human minds.
Citation: Berg, H., Fjelland, R. Numbers and measurement: a critique of evidence-based practice in psychology. Humanit Soc Sci Commun 13, 463 (2026). https://doi.org/10.1057/s41599-026-06832-w
Keywords: evidence-based psychology, randomized controlled trials, psychotherapy research, measurement in psychology, pluralism in science