Clear Sky Science
Attitudes of medical and life sciences university students and postdoctoral fellows toward AI chatbots in education: an international cross-sectional survey
Why this matters for students and teachers
As chatbots like ChatGPT quickly move from science fiction to everyday study tools, universities are scrambling to figure out what they mean for learning and fairness in the classroom. This article reports on a large international survey of medical and life sciences students and postdoctoral fellows, exploring how they actually use these tools, what they find helpful, and what worries them. The findings offer a rare, ground-level look at how tomorrow’s doctors, scientists, and health professionals think about artificial intelligence in their education.

Who took part in the survey
The researchers ran an anonymous online questionnaire between February and March 2024. They reached potential participants in two main ways: by contacting authors of recently published biomedical research papers, and by asking program administrators at top-ranked English-speaking universities to share the survey with their students and fellows. In total, 1,209 eligible responses came in from 73 countries. Most respondents were women, and many were enrolled in doctoral or master's programs, with others in undergraduate programs, professional health degrees, or postdoctoral positions. All were studying or working in medical or life sciences fields such as biology, medicine, nursing, dentistry, rehabilitation, or pharmacy.
How students are already using chatbots
Most participants said they were familiar with the idea of artificial intelligence chatbots, and ChatGPT was by far the most commonly used tool. Many had used chatbots for study-related purposes, such as learning a new topic, making sense of difficult concepts, or managing routine tasks like drafting emails and organizing schedules. When asked how helpful these tools were, a large majority rated them as helpful or very helpful for exploring unfamiliar material, simplifying complex ideas, and taking care of administrative chores. Students were more skeptical, however, about using chatbots for hands-on scientific work such as planning lab experiments or carrying out independent research projects, where they felt the tools added less value.
Opportunities students see
Across the board, respondents highlighted several clear advantages of AI chatbots. They appreciated being able to get explanations and support at any time of day, without waiting for office hours or email replies. Many believed chatbots could help them explore side interests outside their main degree program, broaden their knowledge, and generate ideas for creative or academic projects. Students also saw chatbots as a potentially low-cost way for universities with limited resources to provide additional learning support, including help with language, writing, and coding. Most expected these tools to become important, even essential, for future generations of university students.

Concerns about trust, fairness, and overuse
Despite this optimism, respondents voiced strong concerns. Many doubted that chatbots consistently provide accurate or reliable information, especially on specialized scientific topics, and worried about hidden errors or made-up references. Academic integrity loomed large: students reported that some instructors openly encourage chatbot use for brainstorming or drafting, while others ban it outright. Many were unsure what their institution’s rules actually were, or whether any formal policies existed. They also feared that heavy reliance on chatbots could weaken critical thinking, encourage laziness, deepen inequalities between students who do and do not have access to advanced tools, and blur the boundaries of what counts as “one’s own work.”
What students want from universities
One of the clearest signals from the survey was a desire for guidance. Most participants said students need at least some training to use chatbots effectively and responsibly, and over four out of five were interested in such training themselves. Yet the majority reported that their institutions neither integrate chatbots formally into teaching nor provide instruction on their use; many were also unaware of any AI-related rules intended to protect academic honesty. In open-ended comments, students called for clearer, evolving policies, better communication, and teaching that helps them benefit from chatbots while staying alert to errors, bias, and ethical pitfalls.
What this study tells us
Overall, the study paints a picture of a student body that is curious and enthusiastic about AI chatbots, but also cautious and uneasy. For many medical and life sciences learners, these tools already serve as round-the-clock tutors, writing assistants, and productivity aids. At the same time, students are acutely aware that answers can be wrong, that rules about acceptable use are unclear, and that dependence on chatbots could erode core skills and academic integrity. The authors conclude that universities will need to rethink how they design courses, assignments, and policies so that AI chatbots are used in ways that genuinely support learning, protect fairness, and uphold the values of higher education.
Citation: Ng, J.Y., Shah, A.Q., Roni, E. et al. Attitudes of medical and life sciences university students and postdoctoral fellows toward AI chatbots in education: an international cross-sectional survey. Sci Rep 16, 9089 (2026). https://doi.org/10.1038/s41598-026-42085-y
Keywords: AI chatbots in education, university students, medical and life sciences, academic integrity, ChatGPT use