Clear Sky Science
Challenges in applying the EU AI Act research exemptions to contemporary AI research
Why New Rules for AI Matter to Everyone
Artificial intelligence is rapidly moving from research labs into hospitals, banks, schools, and public agencies. To keep people safe while still encouraging innovation, the European Union has created the world’s first broad law focused entirely on AI, known as the EU AI Act. This paper looks closely at a seemingly narrow but crucial corner of that law: the special carve‑outs for research. The authors argue that, in the real world of modern AI, the line between “just research” and “real‑world use” is blurry, and that this fuzziness could either stifle useful science or open the door to risky applications that slip past safeguards.

How the New AI Law Tries to Protect Us
The EU AI Act sets out a wide‑ranging framework that covers most players involved in building, selling, and using AI systems, even if they operate outside Europe but still affect people in the EU. Within this broad scope, the law carves out two key exemptions for research. One applies when AI systems are still under development and have not yet been put on the market or used for their intended purpose; the other applies to finished systems that are designed and used only for scientific research. On paper, these carve‑outs aim to keep red tape from choking off experimentation, while ensuring that AI used in everyday life meets strict requirements on safety, transparency, and respect for fundamental rights.
Where the Lab Ends and Real Life Begins
The first exemption, for the development phase, assumes a clear divide between lab testing and real‑world use. The law says that activities before a system is “placed on the market” or “put into service” fall outside its scope, but it explicitly excludes testing in real‑world conditions from this safe zone. That sounds straightforward until we consider a common AI practice: running a prototype silently in a hospital, where it ingests live patient data but never shows doctors any output. Is that still “lab” work, or already “real‑world testing”? The authors explain that the answer hinges on the system’s intended purpose. If the hidden system is being trialed for diagnosis, it likely counts as real‑world testing and should trigger the law’s protections, including approvals, oversight, and strict time limits.

When Research and Business Overlap
The second exemption, for scientific use, tries to shield AI that is both developed and used solely for research. In practice, this requirement is hard to pin down. Modern science often unfolds through partnerships among universities, hospitals, companies, and public bodies. Tools built in a lab may later be turned into commercial products, or a company may sell a system that a university uses only for research. The paper walks through concrete scenarios showing how the wording of the law can lead to odd or unclear results—for example, a tool originally designed for patient care but ultimately used only for image analysis in a study. The authors warn that vague notions like “sole purpose” invite both honest confusion and strategic behavior, such as presenting a product as “research” to postpone compliance.
The Risk of Loopholes and Slowdowns
These grey zones matter because they shape who must follow the AI Act’s tougher rules. If definitions are too loose, some actors might quietly run near‑deployment tests on real people without the oversight the law intends, or shift parts of their pipeline to other countries to dodge obligations. If definitions are too strict or applied inflexibly, researchers—especially those in public or non‑profit settings—might be forced to shoulder heavy regulatory burdens even when there is no commercial angle and a clear public benefit, such as in climate modeling or disease prediction. The authors argue that this tension between avoiding loopholes and avoiding unnecessary red tape runs through both exemptions and is heightened by the lack of a shared EU‑wide definition of “scientific research.”
What Needs to Change for Safer, Smarter AI
In the end, the paper concludes that the EU AI Act’s research carve‑outs rest on an outdated picture of research as something cleanly separated from real‑world impact and commercial interest. In contemporary AI, live data, pilot deployments, and mixed public‑private projects are the norm, not the exception. The authors call for clearer definitions of key ideas like “real‑world conditions” and “scientific research,” practical guidance on how and when an AI system crosses the line into regulated use, and stronger guardrails against misuse of the exemptions. They argue that without such refinements, the law risks either undermining its own protections or pushing valuable research to less regulated regions—leaving people exposed to high‑risk systems that could have been better controlled if the rules were more realistic and better aligned with how AI is actually developed today.
Citation: Meszaros, J., Huys, I. & Ioannidis, J.P.A. Challenges in applying the EU AI act research exemptions to contemporary AI research. npj Digit. Med. 9, 288 (2026). https://doi.org/10.1038/s41746-025-02263-0
Keywords: EU AI Act, AI regulation, research exemptions, digital medicine, AI ethics