Clear Sky Science

Matching clinicians with clinical trials using AI


Why Finding the Right Doctors for Trials Matters

Every new medicine or vaccine must be tested in carefully designed clinical trials. Yet many trials struggle to find enough volunteers, or they enroll patients who do not reflect the real-world population that will use the treatment. The authors of this study developed an artificial intelligence system, called DocTr, that helps trial organizers choose which doctors and clinics should run a study. By improving this “site selection” step, the system aims to speed up access to new therapies while making research more inclusive and cost‑effective.

Figure 1.

The Hidden Bottleneck in Medical Research

Clinical trials often fail not because a treatment is ineffective, but because the right patients are never enrolled. Traditionally, pharmaceutical companies rely on manual searches, personal networks and guesswork to decide which doctors to invite. This process can be slow, biased toward a small circle of well‑known investigators and blind to promising sites that care for diverse communities. The result is sobering: many trial locations enroll far fewer patients than planned, some enroll none at all, and delays can cost sponsors hundreds of thousands to millions of dollars per day.

Teaching a Computer to Match Doctors and Trials

DocTr tackles the problem by learning from several large, real‑world data sources. First, it reads public trial descriptions from ClinicalTrials.gov, including the diseases being studied and who is eligible to enroll. Second, it uses anonymized insurance claims to build a profile of each clinician based on the types of patients they treat—essentially, a five‑year snapshot of their practice. Third, it taps into the US Open Payments database, which records industry payments to clinicians linked to specific trials. Those past payment links serve as a stand‑in for which doctors actually worked on which studies, giving the system examples of successful matches to learn from.
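The claims-based clinician profile can be pictured as a simple frequency vector over diagnosis codes. The sketch below is illustrative only: the function name, the toy ICD-10 codes and the flat list of claims are assumptions for this article, not the paper's actual pipeline.

```python
from collections import Counter

def clinician_profile(claims, vocab):
    """Summarize a clinician's practice as normalized diagnosis-code frequencies.

    `claims` stands in for the diagnosis codes in a hypothetical five-year
    claims extract for one clinician; `vocab` fixes the code order so every
    clinician gets a vector of the same length.
    """
    counts = Counter(claims)
    total = sum(counts.values()) or 1  # avoid division by zero for empty extracts
    return [counts.get(code, 0) / total for code in vocab]

# Toy vocabulary: diabetes, hypertension, breast cancer (ICD-10 stems).
vocab = ["E11", "I10", "C50"]
profile = clinician_profile(["E11", "E11", "I10"], vocab)
# profile → [2/3, 1/3, 0.0]
```

In the real system this step runs over millions of claims, but the idea is the same: each clinician becomes a fixed-length vector that downstream components can compare.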

How the AI Learns from Text, Numbers and Networks

To combine these ingredients, the researchers built a model that understands both language and patterns in data. One component uses a medical version of the BERT language model to turn trial summaries and eligibility rules into mathematical vectors that capture meaning. Another component summarizes each doctor’s mix of patient diagnoses into a compact representation. A third piece treats the trial–doctor history as a network and uses graph learning techniques to capture who has worked with whom and in what areas. DocTr blends these signals into a single match score for every potential trial–doctor pair, then ranks clinicians for each new study.
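One way to picture the final ranking step: each trial and each clinician ends up as a vector, and clinicians are sorted by how close their vector lies to the trial's. The sketch below substitutes plain concatenation and cosine similarity for the learned fusion and scoring DocTr actually performs; all names and numbers are hypothetical.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fuse(text_vec, profile_vec, graph_vec):
    # Naive fusion by concatenation; the paper learns this combination instead.
    return list(text_vec) + list(profile_vec) + list(graph_vec)

def rank_clinicians(trial_vec, clinicians):
    # clinicians maps an id to a fused vector; return ids by descending similarity.
    scored = sorted(clinicians.items(),
                    key=lambda kv: cosine(trial_vec, kv[1]),
                    reverse=True)
    return [cid for cid, _ in scored]

clinicians = {
    "dr_a": fuse([0.9, 0.1], [0.7, 0.3], [0.5]),
    "dr_b": fuse([0.1, 0.9], [0.2, 0.8], [0.1]),
}
trial = fuse([1.0, 0.0], [0.8, 0.2], [0.6])
ranking = rank_clinicians(trial, clinicians)
# ranking → ["dr_a", "dr_b"]: dr_a's mix of text, patient and network signals
# sits closer to the trial's profile.
```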

Better Matches, Fairer Enrollment and Fewer Conflicts

When tested on nearly 25,000 US clinicians and more than 5,000 trials, DocTr produced recommended clinician lists that were about 58% more similar to real‑world trial rosters than the best existing methods. Crucially, the system also looks beyond accuracy. A built‑in optimization step reshuffles the top candidates to promote diversity in race, ethnicity and geography, while avoiding doctors who are already busy with many other studies. This process increased diversity scores compared with current practice and cut the average number of overlapping trials for recommended clinicians to almost zero, without sacrificing match quality.
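The diversity-aware reshuffling can be sketched as a greedy re-ranking that trades a little match score for regional coverage and lighter workloads. The weights, field names and penalty form below are assumptions made for illustration, not the paper's actual optimization.

```python
def rerank(candidates, k, lam=0.5, mu=0.5):
    """Pick k clinicians greedily: match score, minus a penalty (weight `lam`)
    for each trial the clinician is already running, plus a bonus (weight `mu`)
    for regions not yet represented in the selection."""
    selected, regions = [], set()
    pool = list(candidates)
    while pool and len(selected) < k:
        def utility(c):
            bonus = 1.0 if c["region"] not in regions else 0.0
            return c["score"] - lam * c["active_trials"] + mu * bonus
        best = max(pool, key=utility)
        pool.remove(best)
        selected.append(best)
        regions.add(best["region"])
    return selected

candidates = [
    {"id": "c1", "score": 0.90, "active_trials": 0, "region": "Northeast"},
    {"id": "c2", "score": 0.85, "active_trials": 5, "region": "Northeast"},
    {"id": "c3", "score": 0.50, "active_trials": 0, "region": "South"},
]
picked = rerank(candidates, k=2, lam=0.1, mu=0.3)
# picked ids → ["c1", "c3"]: ranking by score alone would take c1 and c2,
# but the re-ranking swaps in c3 for regional coverage and a lighter workload.
```

The same greedy pattern extends to other diversity criteria, such as the race and ethnicity of the patient populations each clinician serves.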

Figure 2.

Seeing Around Corners on Cost and Planning

Because DocTr learns from payment records as well, it can estimate how expensive recruitment might be for a new trial or for a given clinician. By finding past trials and doctors with similar profiles, it produces cost and enrollment forecasts that track real data reasonably closely. These forecasts are not full budgets, but they give sponsors a way to compare options, spot unusually costly plans and choose recruitment strategies that balance speed, diversity and expense.
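Profile-based forecasting of this kind can be sketched as a nearest-neighbor average: find the past trials whose vectors sit closest to the new one and average their recorded costs. Everything below (field names, the distance choice, the toy numbers) is assumed for illustration.

```python
import math

def distance(u, v):
    # Euclidean distance between two equal-length trial-profile vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def forecast_cost(new_trial_vec, past_trials, k=3):
    """Estimate recruitment cost as the mean over the k past trials whose
    (hypothetical) profile vectors lie closest to the new trial's."""
    nearest = sorted(past_trials, key=lambda t: distance(new_trial_vec, t["vec"]))[:k]
    return sum(t["cost"] for t in nearest) / len(nearest)

past = [
    {"vec": [0.0, 0.0], "cost": 100},
    {"vec": [0.0, 1.0], "cost": 120},
    {"vec": [5.0, 5.0], "cost": 400},
]
estimate = forecast_cost([0.0, 0.2], past, k=2)
# estimate → 110.0: the two closest past trials cost 100 and 120,
# so the dissimilar 400-cost outlier does not distort the forecast.
```

As the article notes, such estimates are comparisons rather than budgets: their value lies in flagging unusually costly plans, not in predicting an exact figure.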

What This Means for Patients and the Future

The study shows that smart use of existing data can make clinical trials more reliable, faster and fairer. DocTr cannot fix every source of bias—such as restrictive eligibility rules written into a protocol—but it can widen the circle of doctors considered and help include communities that have often been left out of research. If adopted and carefully governed, systems like DocTr could shorten the path from lab discoveries to real‑world treatments, while giving more patients a chance to take part in shaping the medicines of tomorrow.

Citation: Gao, J., Xiao, C., Glass, L.M. et al. Matching clinicians with clinical trials using AI. Nat. Health 1, 290–299 (2026). https://doi.org/10.1038/s44360-026-00073-6

Keywords: clinical trial recruitment, artificial intelligence in medicine, trial site selection, health equity, medical data analytics