Clear Sky Science
Human-AI teaming to improve accuracy and efficiency of eligibility criteria prescreening for oncology trials: a randomized evaluation trial using retrospective electronic health records
Why Finding the Right Patients Matters
For many people with cancer, joining a clinical trial can open the door to cutting-edge treatments and better outcomes. Yet only a small fraction of adults with cancer ever enroll. One major bottleneck happens long before a patient signs a consent form: staff must dig through long, messy medical records to see who even qualifies. This study asks whether pairing human experts with an artificial intelligence system can make that early screening more accurate—without slowing the process down.
How Trial Screening Works Today
Before a person can join a cancer trial, clinical research staff must decide whether the patient meets dozens of detailed entry rules, such as cancer type, stage, test results, and how well they are functioning day to day. Much of this information is buried in unstructured notes (radiology reports, clinic visits, lab summaries) that are often repetitive, incomplete, or contradictory. Manually combing through these documents is slow and exhausting, and even experienced staff can miss key details. As a result, some eligible patients are never identified, and potentially life-prolonging options are lost.
What the Researchers Tested
To see if AI could help, the team used electronic records from 355 people with lung or colorectal cancer treated in a community practice. They focused on 12 common trial criteria, including tumor stage, specific biomarkers, prior treatment response, and basic health status. A specialized “neurosymbolic” language system first converted scanned charts into text, then identified structured facts such as test results and staging details. Two trained research coordinators then reviewed every chart twice, once with AI suggestions on screen (the Human+AI approach) and once without them (the Human-alone approach), with the order of the two conditions randomized. A separate group of clinicians had already created a “gold standard” answer key for each chart to judge accuracy.
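To make the evaluation design concrete, here is a minimal Python sketch of randomizing the review order and scoring one review against the gold-standard key. Everything in it (the criterion names, answer values, and function names) is a hypothetical illustration, not the authors’ actual code or data model.

```python
import random

# Hypothetical illustration: these criterion names and answer values
# are invented, not taken from the paper's actual data model.
CRITERIA = ["stage", "biomarker_status", "prior_treatment_response"]

def randomize_review_order(chart_ids, seed=42):
    """Randomly decide, per chart, which review condition comes first."""
    rng = random.Random(seed)
    order = {}
    for chart_id in chart_ids:
        arms = ["human_plus_ai", "human_alone"]
        rng.shuffle(arms)  # order in which the two conditions are shown
        order[chart_id] = arms
    return order

def score_review(answers, gold):
    """Fraction of criteria on which a review matches the gold standard."""
    matches = sum(1 for c in CRITERIA if answers.get(c) == gold.get(c))
    return matches / len(CRITERIA)

# Toy data for a single chart (entirely made up).
gold = {"stage": "IIIA", "biomarker_status": "EGFR+",
        "prior_treatment_response": "partial response"}
with_ai = {"stage": "IIIA", "biomarker_status": "EGFR+",
           "prior_treatment_response": "stable disease"}
without_ai = {"stage": "IIIA", "biomarker_status": "not documented",
              "prior_treatment_response": "stable disease"}

print(round(score_review(with_ai, gold), 2))     # 0.67 (2 of 3 correct)
print(round(score_review(without_ai, gold), 2))  # 0.33 (1 of 3 correct)
```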

How Well the Human–AI Team Performed
When humans and AI worked together, they matched the gold-standard answers more often than humans working alone. Overall, the Human+AI team got about three out of four details right, compared with a little over seven out of ten for human reviewers alone, and performed far better than the AI system on its own. The biggest gains came in tricky areas such as biomarker testing and results, the precise staging of the tumor, and how a patient had responded to earlier treatments. In these categories, the AI’s strength at sifting through large volumes of text helped coordinators spot information they might otherwise overlook, while the humans corrected AI missteps and interpreted uncertain cases.
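The per-criterion pattern the authors describe can be made concrete with a small breakdown like the one below. The data structure (a chart ID mapped to criterion answers) is an invented stand-in, not the study’s actual format.

```python
from collections import defaultdict

def per_criterion_accuracy(reviews, gold_standards):
    """Accuracy broken down by criterion across many charts.

    Both arguments map chart_id -> {criterion: answer}; the structure
    is a hypothetical stand-in for the study's data.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for chart_id, gold in gold_standards.items():
        answers = reviews.get(chart_id, {})
        for criterion, truth in gold.items():
            total[criterion] += 1
            correct[criterion] += int(answers.get(criterion) == truth)
    return {c: correct[c] / total[c] for c in total}

# Running this separately on the Human+AI and Human-alone reviews and
# comparing the two dictionaries would show where the gains concentrate,
# e.g. in biomarker results, staging, or prior treatment response.
```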
Speed, Trade-offs, and Human Bias
Surprisingly, adding AI did not make the process faster. Both approaches took a little over half an hour per chart on average. The authors suggest that, instead of saving time, the AI shifted the coordinators’ work: rather than hunting for every detail themselves, they spent more effort checking and interpreting AI-suggested entries. This may actually be a healthy safeguard, reducing the risk that people simply accept the machine’s answers without question.

The study also probed where collaboration can go wrong. In one measure of patient functioning, the AI was unreliable, and human reviewers who leaned too heavily on its output did slightly worse, a sign of “automation bias.” In other areas, humans seemed to underuse accurate AI signals, hinting at “confirmation bias,” where people prefer information that matches their first impressions.
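One simple way to probe automation bias in data like this, offered here as an illustration rather than the authors’ actual analysis, is to restrict attention to items where the AI suggestion was wrong and measure how often reviewers adopted the wrong suggestion anyway. The record structure and values below are hypothetical.

```python
def automation_bias_rate(items):
    """Among items where the AI suggestion was wrong, the fraction of
    times the human reviewer nevertheless gave the AI's answer.

    Each item is one criterion on one chart, with 'ai', 'human', and
    'gold' answers; the structure and values are hypothetical.
    """
    ai_wrong = [it for it in items if it["ai"] != it["gold"]]
    if not ai_wrong:
        return 0.0
    followed = sum(1 for it in ai_wrong if it["human"] == it["ai"])
    return followed / len(ai_wrong)

# Toy example: the AI misreads a functioning score on two charts and
# the reviewer copies the mistake on one of them.
items = [
    {"ai": "ECOG 2", "human": "ECOG 2", "gold": "ECOG 1"},  # mistake adopted
    {"ai": "ECOG 0", "human": "ECOG 1", "gold": "ECOG 1"},  # mistake caught
]
print(automation_bias_rate(items))  # 0.5
```

A mirror-image check, counting how often reviewers rejected AI suggestions that agreed with the gold standard, would probe the underuse of accurate signals described above.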

What This Means for Future Cancer Care
In plain terms, this trial shows that a well-designed partnership between people and AI can make early trial screening a bit more accurate without slowing it down. The improvements are modest, but they are concentrated in exactly the kinds of complex details—like biomarker status and precise staging—that often decide whether a patient can join a study. If such systems are further refined and tested in live clinic workflows, they could help uncover more eligible patients, broaden who gets access to cutting-edge oncology trials, and do so while keeping humans firmly in charge of the final decisions.
Citation: Parikh, R.B., Kolla, L., Beothy, E.A. et al. Human-AI teaming to improve accuracy and efficiency of eligibility criteria prescreening for oncology trials: a randomized evaluation trial using retrospective electronic health records. Nat Commun 17, 2306 (2026). https://doi.org/10.1038/s41467-026-68873-8
Keywords: cancer clinical trials, electronic health records, artificial intelligence, patient eligibility, human-AI collaboration