Clear Sky Science
A qualitative interview study investigating patient, health professional, and developer perspectives on real-world implementation of patient-centered AI systems

Why this matters to everyday families
As computers grow smarter, many people hope artificial intelligence (AI) will help doctors spot health problems early and tailor care to each person. Yet, even very accurate AI tools often fail to make a real difference in people’s lives once they leave the research lab. This study looks closely at what patients, health professionals, and AI developers think it will take to bring patient-centered AI into everyday care, using a tool that predicts the risk of postpartum depression during pregnancy as a real-world example.

Listening to three groups at once
The researchers interviewed 36 people: pregnant or recently pregnant patients, health professionals who care for them, and AI developers familiar with mental or reproductive health. Instead of just measuring whether the algorithm was accurate, the team focused on how it would actually fit into people’s lives and clinical routines. They asked each group how they felt about seeing or using AI-generated risk scores for postpartum depression, how that information should be shared, and who should be responsible if something went wrong. By combining these voices within a single framework, the study revealed tensions and common ground that are hard to see when only one group is asked.

Fears of harm and hopes for real help
Across the board, participants agreed that any patient-centered AI must do more good than harm. Patients worried that a high-risk score for depression could deepen stigma or even trigger involvement from child protective services. Health professionals focused on the danger of biased or low-quality data, especially if certain racial or social groups were underrepresented in the data used to build the tool. Developers emphasized the need to identify and reduce these problems while the tools are being built, not after they are deployed. At the same time, everyone stressed that the tool must lead to clear, useful actions, such as easier referrals to mental health care or social support, rather than just another number in the chart. When risk scores were shown without obvious next steps, both patients and clinicians were unsure how to respond.

How information should be shared
People had strong and varied views on how AI results should reach patients. Some wanted to see their risk scores ahead of appointments through an online portal so they could prepare questions and process emotions in private. Others preferred to learn about results only in conversation with a trusted clinician who could explain what they meant. All groups agreed that explanations should be in plain language, not medical or technical jargon, and that patients would need guidance to interpret what a certain percentage or risk band really means for their lives. Many patients wanted to know which specific factors, such as past health history or current stress, were driving their score so they could talk with their care team about what might be changeable.

Trust, privacy, and shared responsibility
Trust in AI turned out to be tightly linked to questions of privacy and accountability. Participants wanted reassurance that personal data, especially sensitive mental health and pregnancy information, would be stored and used safely. Patients often assumed that anything visible in their online record was already vetted and accurate, which raises the stakes when an AI-generated result is wrong. Health professionals worried about being blamed if they followed, or chose not to follow, an AI suggestion that later turned out badly. Developers argued that responsibility should be shared: health systems, regulators, clinicians, and AI creators all play a role in checking for bias, monitoring performance over time, and deciding how and when tools should be used in care.

What needs to change next
From these conversations, the authors outline several practical steps to make AI more genuinely patient-centered. Health professionals need training, time, and supportive tools to interpret AI output and discuss it with patients. Patients and frontline clinicians should be involved early in the design and testing of AI systems so that questions of fairness, usefulness, and communication are addressed from the start. AI outputs should be presented in flexible, easy-to-understand formats that highlight options for action, not just risk levels. Finally, clear, multi-layered rules and oversight are needed to protect privacy and share responsibility across institutions, regulators, and developers. Together, these changes could help move AI from impressive lab results to safer, more meaningful support for families during pregnancy and beyond.
Citation: Benda, N., Desai, P., Reza, Z. et al. A qualitative interview study investigating patient, health professional, and developer perspectives on real-world implementation of patient-centered AI systems. npj Digit. Med. 9, 352 (2026). https://doi.org/10.1038/s41746-026-02587-5
Keywords: healthcare artificial intelligence, postpartum depression, patient-centered care, digital health ethics, mental health screening