Clear Sky Science

A scoping review on using real-world data to evaluate the effectiveness of mHealth applications


Why Your Health Apps Matter Beyond Your Phone

Many of us now log our mood, sleep, steps, or blood sugar in health apps, but what happens to all that information? This paper explores how real-life data from mobile health (mHealth) applications are being used to judge whether these tools truly help people in their everyday lives. Instead of relying only on traditional clinical trials, the authors look at studies that tap into the data apps naturally collect while people use them at home, at work, and on the go.

Figure 1.

What the Researchers Set Out to Discover

The authors conducted a scoping review, a kind of broad mapping exercise, to see how real-world data from health apps are currently used in published research. They focused on patient-facing apps that people use independently to manage health, track conditions, or support lifestyle change. Crucially, they only included studies that used “naturally emerging” data—information captured through normal app functions or routine healthcare records, not extra questionnaires or tools bolted on just for a study. They grouped this data into three simple types: information people type into an app, data automatically recorded by devices like sensors or wearables, and information drawn from health systems such as electronic health records or insurance claims.

Where Health Apps Are Being Put to the Test

From over ten thousand papers, the team identified 72 studies that met their criteria, covering 61 different apps. Most of these apps were aimed at mental health problems, such as depression or insomnia, or at metabolic issues like diabetes and weight management. Many of the apps function as medical tools in practice, helping to guide treatment or day-to-day decisions, even if their official regulatory status is not always clearly reported. Mental health apps tended to rely heavily on what users typed in about their mood, sleep, or symptoms, while metabolic apps more often drew on connected devices, such as glucose monitors or smart scales that record measurements automatically.

What Kind of Data These Apps Actually Use

The review found that most studies leaned on actively entered information, such as symptom surveys completed within the app, with fewer making strong use of passive data from sensors or healthcare systems. Around seven in ten studies used user-entered data, often scores about pain, mood, or sleep. About a quarter used device-generated data, and only a small fraction linked in data from medical records or insurance claims. Many apps collected information continuously, or at least very frequently, yet the way researchers analyzed this rich stream was often surprisingly limited. Few studies combined multiple data sources, for example joining self-reported wellbeing with sensor readings, despite the promise that such combinations could give a fuller and more reliable picture of health.

Figure 2.

How Strong Is the Evidence So Far?

When the authors looked at how the studies were designed, they found that most were relatively simple before-and-after comparisons within a single group of users, without a control group to compare against. Only a small number used more rigorous approaches, such as comparing app users with similar non-users, or running pragmatic randomized trials that more closely mirror real-life care. As a result, many current studies can show that people’s symptoms changed while they used an app, but they cannot confidently claim that the app itself caused those changes. Study sizes varied widely, from a few dozen people to hundreds of thousands, and follow-up often lasted only a few months, meaning long-term effects are still poorly understood.

What This Means for Patients and Future Digital Care

Overall, the review paints a picture of great promise but unfinished work. Health apps are clearly capable of capturing large amounts of real-world information about how people feel and function, and these data could support ongoing, flexible checks on how well digital tools perform once they are on the market. Yet, so far, most published studies make only partial use of this potential: they rely heavily on self-reports, use limited study designs, and rarely link app data with clinical records. To give clinicians, regulators, and patients greater confidence, future evaluations will need to blend different kinds of real-world data, follow people for longer, and use stronger comparison methods. Done well, this could turn everyday app use into a powerful engine for learning what truly works in digital health care.

Citation: Gehder, S., Brückner, S., Gilbert, S. et al. A scoping review on using real-world data to evaluate the effectiveness of mHealth applications. npj Digit. Med. 9, 309 (2026). https://doi.org/10.1038/s41746-026-02562-0

Keywords: mobile health apps, real-world data, digital therapeutics, health data research, evidence-based mHealth