Clear Sky Science
Automating the assessment of quality indicators using a clinical data warehouse: a pilot study on door-to-imaging time in stroke management
Why Every Minute Matters
When someone has a stroke, doctors race against the clock: the faster brain scans are done, the better the chances of limiting lasting damage. Hospitals are supposed to track how quickly they move from a patient’s arrival to their first brain scan, but today this is often checked by hand, one medical file at a time—a slow, error‑prone task. This study explores whether modern hospital data systems can automatically measure these delays, potentially freeing up staff time and giving health services a clearer, more timely picture of how well they are caring for stroke patients.
Turning Hospital Records into Useful Signals
The researchers focused on a simple but crucial yardstick: “door‑to‑imaging time,” the delay between a patient’s arrival at the hospital and their first brain scan. Using the clinical data warehouse of the Greater Paris University Hospitals—a vast repository pooling electronic information from 38 hospitals—they pulled together records for more than 6,000 adults hospitalized for acute stroke in 2022. For each stay, they combined administrative arrival times with technical information from the imaging system, which stores when a scan actually begins. By subtracting these time points, they let the computer calculate the delay automatically, instead of relying on staff to read and interpret each chart.
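At its core, this automated calculation is just a subtraction of two timestamps. As a rough illustration (not code from the study; the function name and the ISO 8601 timestamp format are assumptions), it might look like this:

```python
from datetime import datetime

def door_to_imaging_minutes(arrival: str, first_scan: str) -> float:
    """Delay in minutes between hospital arrival and the first brain scan.

    Both timestamps are assumed to be ISO 8601 strings,
    e.g. "2022-03-14T08:05:00" (illustrative values, not study data).
    """
    t_arrival = datetime.fromisoformat(arrival)
    t_scan = datetime.fromisoformat(first_scan)
    # A negative result would signal a data-quality problem
    # (a scan recorded before arrival), which real pipelines must flag.
    return (t_scan - t_arrival).total_seconds() / 60

# A patient scanned 2 h 31 min after arrival:
delay = door_to_imaging_minutes("2022-03-14T08:05:00", "2022-03-14T10:36:00")
print(delay)  # 151.0 (minutes), about two and a half hours
```

In practice, the hard part is not the subtraction but choosing which stored timestamps count as "arrival" and "first scan," which is exactly where the study found the pitfalls.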

Checking the Computer Against Human Review
To find out whether this automated approach could replace the traditional method, the team compared it to France’s national quality audit, where hospital staff manually reviewed a sample of patient files. They matched 361 stroke cases that appeared in both the data warehouse and the manual audit and then compared the two door‑to‑imaging estimates. At the level of overall hospital performance, the two methods were strikingly similar: both found a median delay of about two and a half hours, and both agreed that just over half of patients received imaging within three hours of arrival. Statistical tests showed a strong level of agreement when classifying patients as above or below this three‑hour threshold.
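The threshold comparison described above can be sketched in a few lines. This is a simplified stand-in using plain percent agreement on synthetic numbers, not the study's matched cases or its formal statistical tests:

```python
THRESHOLD_MIN = 180  # the three-hour benchmark, in minutes

def within_threshold(delays_min):
    """Flag each patient as imaged within the three-hour window or not."""
    return [d <= THRESHOLD_MIN for d in delays_min]

def percent_agreement(auto_delays, manual_delays):
    """Share of patients the two methods classify the same way."""
    auto_flags = within_threshold(auto_delays)
    manual_flags = within_threshold(manual_delays)
    same = sum(a == m for a, m in zip(auto_flags, manual_flags))
    return same / len(auto_flags)

# Synthetic delays (minutes) for five matched patients:
automated = [95, 150, 200, 310, 170]
manual = [100, 145, 170, 300, 160]
print(percent_agreement(automated, manual))  # 0.8: one case straddles the threshold
```

This illustrates why the two methods can agree well at the threshold level even when exact patient-by-patient times differ: small timestamp discrepancies only change the classification for cases near the three-hour mark.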
Where Automation Stumbles
Looking more closely at individual patients, the picture was less tidy. For about three quarters of cases, the two methods agreed to within one hour, but exact times rarely matched, and statistical agreement for patient-by-patient delays was poor. The main problem lay in identifying the true moment of the first brain scan. Manual reviewers can pull this time from many places in the record, such as free-text notes, imaging summaries, or specific forms, while the automated method relies on standardized technical data from the imaging system. In extra checks of 300 reports, these technical timestamps proved fairly reliable when all scans were properly recorded, but gaps in documentation (such as scans performed in another hospital, or missing entries) created mismatches. In some cases the automated method picked the wrong scan; in others, human reviewers misread or inconsistently recorded the time.

Lessons for Better Data and Better Care
The study also exposed broader weaknesses in how hospitals record key events in the stroke journey. Even something as simple as “arrival time” can be ambiguous: a patient might receive initial care before being formally registered, and different staff may rely on different parts of the record. Because information can be duplicated and altered in several places, manual reviewers do not always agree among themselves either. The authors argue that improving how data are structured—standardizing imaging descriptions, making sure outside scans are logged in a consistent way, and harmonizing how arrival and imaging times are stored—would make both automated and manual measurements more trustworthy.
What This Means for Patients
In everyday terms, the study shows that computers can already provide a solid big‑picture view of how quickly hospitals deliver brain scans to stroke patients, using far less staff time than current audits. However, for examining an individual case—for example, to understand what went wrong for a particular patient—the automated method is not yet precise enough, especially when care is complex or involves multiple facilities. Until hospital data are cleaner, more complete, and better linked across sites, the authors suggest combining automated calculations with targeted human checks. Done well, this partnership between people and data systems could give health authorities a sharper, more reliable view of stroke care performance—and ultimately help ensure that fewer precious minutes are lost when a stroke strikes.
Citation: Hassanaly, O., Doutreligne, M., Troude, P. et al. Automating the assessment of quality indicators using a clinical data warehouse: a pilot study on door-to-imaging time in stroke management. Sci Rep 16, 12121 (2026). https://doi.org/10.1038/s41598-026-41833-4
Keywords: stroke care, clinical data warehouse, door-to-imaging time, healthcare quality, electronic health records