Clear Sky Science
Automated diagnosis of plus form and early stages of ROP using deep learning models
Why tiny eyes and smart computers matter
Every year, thousands of premature babies are at risk of losing their sight because the blood vessels in the back of their eyes do not grow normally, a condition called retinopathy of prematurity (ROP). Catching this problem early can save vision, but it requires frequent eye exams by highly trained specialists—experts who are in short supply in many parts of the world. This study explores how modern artificial intelligence (AI) can help doctors spot early warning signs in retinal photographs, potentially bringing expert-level screening to hospitals and clinics that lack specialist eye care.

The problem: fragile vision in the tiniest patients
ROP develops when premature birth interrupts the normal growth of blood vessels in the retina, the light-sensitive layer at the back of the eye. Babies born very early or with very low birth weight are at the highest risk. In mild cases, the eye recovers on its own. In severe cases, abnormal vessels can pull on the retina and cause permanent blindness. Worldwide, ROP blinds an estimated 50,000 people, especially in regions where neonatal care has improved survival but eye screening programs and specialists have not kept pace. Current screening is labor-intensive, costly, and subjective: two experts can sometimes disagree about how severe a baby's disease really is.
What doctors look for: twisted vessels and early stages
Eye doctors judge ROP using two main cues in retinal images. One is the overall stage of the disease, from Stage 0 (no visible changes) through the early problem stages (1–3). The other is Plus disease, a warning sign where blood vessels on the retina become unusually dilated and twisted. Plus disease signals a higher risk of serious damage and often triggers treatment such as laser therapy or drug injections. Evaluating these features by eye is challenging, especially when images are blurry or when infants need repeated exams week after week. A system that could automatically flag Plus disease and estimate the stage of ROP from images alone would be a powerful support tool for clinicians.
How the AI sees: tracing vessel maps from eye photos
The researchers built a two-step AI pipeline using more than 6,000 retinal images from 188 infants. First, they trained a neural network to draw a precise vessel map of each retina, highlighting every visible blood vessel, even the finest branches. Among several competing image-processing models, a version called U-Net++ worked best at capturing detailed vessel patterns, especially in noisy or low-contrast images. To improve clarity, the team enhanced each photo with contrast-boosting filters and noise reduction before segmentation. For Plus disease detection, they then fed only the vessel maps, not the full color photos, into a second neural network, because Plus disease is defined almost entirely by vessel thickness and curvature.
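To make the "enhance, then segment" idea concrete, here is a minimal sketch of the preprocessing step only, using a percentile contrast stretch and a small box blur as simplified stand-ins for the contrast-boosting (typically CLAHE-style) and denoising filters the study describes; the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def enhance_fundus(gray, low_pct=2, high_pct=98, blur=3):
    """Simplified preprocessing sketch: a percentile contrast stretch
    (standing in for CLAHE) followed by a separable box blur for noise
    reduction. `gray` is a 2-D uint8 retinal image."""
    lo, hi = np.percentile(gray, [low_pct, high_pct])
    # stretch intensities so the [lo, hi] range fills [0, 1]
    stretched = np.clip((gray.astype(float) - lo) / max(hi - lo, 1e-6), 0, 1)
    # box-blur denoise, applied row-wise then column-wise
    kernel = np.ones(blur) / blur
    smooth = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, stretched)
    smooth = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, smooth)
    return (smooth * 255).astype(np.uint8)
```

The enhanced image would then be passed to the segmentation network (U-Net++ in the study) to produce the binary vessel map.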

Teaching the network to grade disease severity
For judging ROP stage, the AI needed more than blood vessel shape alone. The system therefore combined the original color retinal images with their corresponding vessel maps, giving the model both the overall view of the retina and a sharpened look at its vessels. The team tested several well-known deep learning backbones and found that a model called EfficientNetB4 offered the best balance of accuracy and efficiency. On held-out validation images, the Plus disease detector reached an accuracy of 99.6 percent, while the stage classifier achieved 98 percent accuracy across Stages 0 through 3. Additional checks, including precision-recall curves and receiver-operating-characteristic curves, showed that the model maintained high sensitivity (rarely missing disease) and high specificity (rarely raising false alarms), even though Plus disease was much rarer than normal images.
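The sensitivity and specificity the paragraph mentions come straight from a binary confusion matrix. The sketch below shows the arithmetic on invented, purely illustrative counts (they are not figures from the study):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity (recall on diseased eyes), specificity (recall on
    normal eyes), and overall accuracy from a binary confusion matrix:
    tp/fn count diseased eyes, tn/fp count normal eyes."""
    sensitivity = tp / (tp + fn)          # diseased eyes correctly flagged
    specificity = tn / (tn + fp)          # normal eyes correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Toy counts for a rare-disease screen: 48 Plus eyes among 1,000 images.
sens, spec, acc = screening_metrics(tp=47, fp=3, tn=949, fn=1)
```

Note how, with such imbalanced data, accuracy alone can look excellent even when sensitivity slips, which is why the authors also report precision-recall and ROC curves.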
Looking inside the black box
Because clinicians must trust any tool that influences treatment decisions, the authors probed how their AI made its choices. Using visualization methods such as t-SNE, they showed that images from different classes (for example, Plus vs. Normal or Stage 1 vs. Stage 3) formed well-separated clusters in the model's internal feature space. With heat-map techniques called Grad-CAM, they highlighted which parts of each retina most strongly influenced a prediction. For Plus disease, the model focused on areas where vessels were abnormally wide or twisted, matching what experts look for. For stage grading, it also paid attention to other regions such as the optic disc and macula, suggesting that its reasoning aligned closely with established medical criteria rather than spurious image artifacts.
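The core of a Grad-CAM heat map is a single weighted sum: each convolutional feature map is weighted by the spatial average of the prediction's gradient with respect to that map, and only positive evidence is kept. A minimal NumPy sketch of that step (the surrounding machinery for extracting feature maps and gradients from a trained network is omitted):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Core Grad-CAM computation. Both inputs have shape (K, H, W):
    K feature maps from the last conv layer, and the gradient of the
    predicted class score with respect to each map."""
    weights = gradients.mean(axis=(1, 2))              # one weight per map
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                           # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam
```

The resulting (H, W) map is upsampled to the input image's resolution and overlaid as a heat map, which is how the authors could see the model attending to dilated, tortuous vessels for Plus disease.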
What this means for babies and clinics
In plain terms, this work shows that a carefully designed AI system can read retinal images of premature infants with near-expert accuracy, both to detect dangerous vessel changes and to judge how far the disease has progressed. The study was conducted at a single medical center and included only early to moderate stages, so larger multi-hospital trials and data from more advanced cases are still needed. Yet the results suggest that, with further validation and careful integration into telemedicine platforms, such tools could help overburdened health systems screen many more infants, more consistently, and at lower cost. That could mean earlier treatment and a better chance of preserving vision for some of the most vulnerable patients in neonatal care.
Citation: Vahidmoghadam, M., Ghorbani, P., Ahmadi, M.J. et al. Automated diagnosis of plus form and early stages of ROP using deep learning models. Sci Rep 16, 7234 (2026). https://doi.org/10.1038/s41598-026-37064-2
Keywords: retinopathy of prematurity, artificial intelligence, deep learning, medical imaging, neonatal eye disease