Clear Sky Science
Investigating expectations and needs regarding the use of large language models at Bavarian university clinics
Why this matters for patients and professionals
Hospitals are beginning to experiment with the same chatbots many people now use at home, but for far more serious tasks. This study looks at how doctors, medical students, and administrative staff at Bavarian university clinics view these tools: what they already use them for, what they hope to gain, and what makes them uneasy. Understanding their views helps shape how artificial intelligence is introduced into real clinics, where both patient safety and trust are on the line.
Who was asked and what they already do
The researchers surveyed 120 people across five university hospitals in Bavaria: 70 medical students, 36 physicians, and 14 members of administrative staff. Many respondents already use large language models in their daily work or studies, especially students and doctors. They turn to these tools to search the literature, generate ideas, translate texts, draft e‑mails and reports, summarize long documents, and clarify unfamiliar concepts. Administrative staff use them less often but show interest in help with speech transcription and document handling. At the same time, a significant share—about a quarter of students and doctors and a third of administrators—report not using such tools at all, and many students feel their understanding of the technology lags behind that of their peers.
What people see as useful
When asked which future uses would be most relevant, respondents across all groups highlighted help with translating medical reports and turning spoken language into written text. They also valued automatic drafting of clinical reports, summarizing long documents, and simplifying technical language so that patients can understand it more easily. In contrast, more complex roles—such as suggesting diagnoses or providing detailed medical reasoning—were rated as less important, especially by administrative staff. Answering patient questions directly was the least attractive idea, yet a majority said they would be comfortable letting a chatbot draft responses during a crisis, as long as a human expert reviewed the answers first. This pattern suggests that professionals welcome support with routine text-heavy tasks but want to keep tight human control over clinical decisions and communication.
How big a change they expect
Most participants believe language models will have a positive effect on their field, and many already sense a noticeable impact today or expect one within the next decade. They anticipate that automating repetitive paperwork could free up time for direct patient care and support more evidence-based, personalized treatment, potentially making care more cost‑effective. Opinions are more mixed about how much the technology will reshape staffing needs. Some foresee fewer workers being required, especially in administrative roles, but half of respondents think overall workforce needs will stay about the same. Standards for accuracy also depend on the task: for early screening by non‑specialists, respondents are willing to accept performance near the level of an average doctor, but for tools that guide treatment decisions for trained physicians, they expect clearly superior performance.
What worries them most
Despite this optimism, participants also voiced serious reservations. Doctors and students were most concerned about the "black box" nature of these systems: they cannot easily see how a conclusion was reached, yet must take responsibility for acting on it. They also feared threats to data privacy, given that medical records contain extremely sensitive information, and were uneasy about healthcare becoming too dependent on large technology companies. Students additionally worried about damage to the trust-based relationship between doctor and patient if machines appear to be making key decisions. Administrative staff were especially anxious about how automation might affect job security. Across all groups, there was a clear desire for tools that explain their reasoning, protect confidential data, and support rather than replace human judgment.
How ready hospitals are—and what needs to change
The clearest warning signal from the survey is that most respondents feel their institutions are not ready for the introduction of language-model tools. Even though many already use such systems privately for work, they often do so without guidance, training, or approved infrastructure, which raises serious privacy and safety risks. When asked what should change, the most common request was education: courses and seminars that explain what these models can and cannot do and how to use them responsibly. Participants also called for investment in secure technical infrastructure, better digital records to replace handwritten notes, clear rules on legal responsibility, and closer cooperation with hospital IT departments. Many emphasized that systems must work well in German and integrate smoothly with existing hospital software.
What this means going forward
For a layperson, the main message is that many people inside hospitals already see chatbots and related tools as helpful assistants, especially for cutting down on tedious paperwork and improving how information is shared with patients. Yet they are equally aware of the dangers of rushing ahead without proper safeguards. The study suggests that if hospitals provide training, robust privacy protections, and well-integrated systems that keep doctors firmly in charge, language models could support more efficient and personalized care rather than replace human expertise. In other words, the future these professionals envision is not a “robot doctor,” but smarter tools that help human clinicians do their jobs better and more safely.
Citation: Vladika, J., Fichtl, A. & Matthes, F. Investigating expectations and needs regarding the use of large language models at Bavarian university clinics. Sci Rep 16, 10505 (2026). https://doi.org/10.1038/s41598-026-45245-2
Keywords: large language models in healthcare, medical AI adoption, hospital digital transformation, clinical staff attitudes, AI and patient privacy