When children and young adults need braces, one common problem is that the upper jaw is too narrow. Dentists can gently widen the upper jaw with special devices, but the safest and least invasive method only works while a seam of bone in the roof of the mouth—the midpalatal suture—is still open. Once this seam closes, heavier hardware or even surgery may be needed. This study provides a carefully built image dataset and an artificial intelligence (AI) model that help dentists see how far that seam has fused, aiming to guide treatment decisions more accurately and consistently.
The hidden seam in the roof of the mouth
The midpalatal suture is a natural joint that runs along the center of the palate, allowing the upper jaw to grow wider during childhood and adolescence. When the upper jaw is too narrow, doctors often use rapid maxillary expansion, in which a device slowly pushes the left and right halves of the upper jaw apart. If the suture is still open, a tooth‑anchored expander is usually enough. But if the suture has already fused into solid bone, doctors must turn to bone‑anchored expanders or surgery, which are more complex and invasive. As a result, knowing whether this seam is open or closed has a direct impact on comfort, risk, and cost for patients.
The challenge of reading 3D dental scans
Today, specialists usually judge the suture’s maturity by eye on cone‑beam CT (CBCT) scans, placing each patient into one of five stages from A (clearly open) to E (fully fused). This widely used system, proposed by Angelieri and colleagues, helps guide treatment choices, but it has drawbacks. Different clinicians may disagree when the images are subtle, and 2D slices can miss important 3D details. The authors highlight that visual inspection is time‑consuming, subjective, and especially tricky in borderline cases. At the same time, earlier AI attempts often used only thin cross‑sections of the scan instead of the full 3D volume, risking the loss of crucial information.
Building a rich, carefully checked dataset
To address these issues, the team collected 600 CBCT scans from patients aged 4 to 25, carefully removing all personal identifiers. An experienced orthodontist first created a personalized image of each patient’s palate by tracing the true curve of the roof of the mouth instead of using a flat cut, ensuring that the entire suture was visible. Then, an orthodontist and a maxillofacial imaging specialist independently assigned each case to one of the five standard stages, repeating their judgments a month later. Statistical checks showed excellent agreement both between the two experts and within each expert over time, giving confidence that these stage labels are reliable. Alongside the images, the researchers recorded clinical information such as age, sex, dental maturity, neck‑bone maturity, palate shape, and bone density around the suture.
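Agreement between two raters assigning ordinal stages like A–E is commonly quantified with a weighted Cohen’s kappa, which penalizes disagreements more the farther apart the chosen stages are. The article does not specify which statistic the authors used, so the following is only an illustrative sketch with made-up labels, using scikit-learn:

```python
# Illustrative sketch: quadratic-weighted Cohen's kappa for two raters
# staging the same cases. The labels below are invented toy data, not
# values from the actual dataset.
from sklearn.metrics import cohen_kappa_score

rater1 = ["A", "B", "B", "C", "D", "E", "C", "B"]
rater2 = ["A", "B", "C", "C", "D", "E", "C", "B"]

# Quadratic weighting makes an A-vs-E disagreement count far more
# heavily than the single B-vs-C disagreement in this toy example.
kappa = cohen_kappa_score(rater1, rater2, weights="quadratic")
print(round(kappa, 3))
```

Values near 1 indicate near-perfect agreement; the same computation applied to one rater’s first and second readings measures intra-rater consistency.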
How the AI sees patterns humans might miss (Figure 1)
Using this dataset, the authors built an AI model that combines two kinds of information: the 3D CBCT image and the accompanying measurements in table form. A three‑dimensional convolutional neural network learns patterns from the full scan volume, while a simpler network handles the clinical numbers such as age and bone density. These two streams are then fused into a single representation used to predict the suture stage. To help ensure fairness and robustness, the team repeated training with several random splits of the data. The combined, or “fusion,” model consistently outperformed versions that used only images or only clinical data, showing that both anatomy and patient context matter. Overall, the model achieved high accuracy, and its ability to distinguish between stages was reflected in area‑under‑the‑curve values above 0.95 for all classes.
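The two-stream design described above can be sketched in a few lines. This is a minimal illustration of the idea, assuming PyTorch; the layer sizes, volume dimensions, and number of clinical features are placeholders, not the authors’ actual architecture:

```python
# Minimal sketch of a two-stream "fusion" classifier: a 3D CNN for the
# CBCT volume plus a small network for tabular clinical data, joined
# before the final prediction. All sizes here are illustrative.
import torch
import torch.nn as nn

class FusionStager(nn.Module):
    def __init__(self, n_tabular=6, n_stages=5):
        super().__init__()
        # Image stream: learns spatial patterns from the full 3D scan.
        self.image_stream = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),  # -> 16 features
        )
        # Tabular stream: handles clinical numbers (age, density, ...).
        self.tabular_stream = nn.Sequential(
            nn.Linear(n_tabular, 16), nn.ReLU(),
        )
        # Fused representation -> one score per stage (A through E).
        self.head = nn.Linear(16 + 16, n_stages)

    def forward(self, volume, clinical):
        fused = torch.cat([self.image_stream(volume),
                           self.tabular_stream(clinical)], dim=1)
        return self.head(fused)

model = FusionStager()
# Two fake scans (batch of 2), each a 32x32x32 volume with 6 clinical values.
logits = model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 6))
print(logits.shape)  # one score per stage for each scan
```

Concatenating the two learned representations before the classification head is what lets both anatomy and patient context influence the predicted stage.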
Peeking inside the model’s reasoning (Figure 2)
To understand what the AI was focusing on, the researchers generated heatmaps using a method called Grad‑CAM. These visual overlays highlighted the regions of the upper jaw and palate that contributed most strongly to the model’s decisions, clustering around the midpalatal suture and nearby bone. This gives clinicians reassurance that the AI is basing its judgment on anatomically meaningful features rather than irrelevant image artifacts. At the same time, the authors noted signs of overfitting—where the model learns the training data too well and may not generalize perfectly to new clinics or scanners—emphasizing the need for larger, multi‑center datasets and further refinement.
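The core of Grad-CAM is simple: average the gradients of the predicted class score over space to weight each feature map, then sum and rectify. A minimal sketch of that idea on a toy 3D network, assuming PyTorch (the network and volume size are illustrative, not the paper’s):

```python
# Minimal Grad-CAM sketch on a toy 3D conv layer. Real use would hook
# the last conv layer of a trained network; everything here is random.
import torch
import torch.nn as nn

conv = nn.Conv3d(1, 4, kernel_size=3, padding=1)
head = nn.Linear(4, 5)  # five stages A-E

x = torch.randn(1, 1, 8, 8, 8)           # one fake 8x8x8 "scan"
feats = conv(x)                          # feature maps: (1, 4, 8, 8, 8)
pooled = feats.mean(dim=(2, 3, 4))       # global average pool -> (1, 4)
logits = head(pooled)

# Gradient of the top class's score with respect to the feature maps.
score = logits[0, logits.argmax()]
grads = torch.autograd.grad(score, feats)[0]

# Channel weights = spatially averaged gradients; the heatmap is the
# ReLU of the weighted sum over channels.
weights = grads.mean(dim=(2, 3, 4), keepdim=True)
cam = torch.relu((weights * feats).sum(dim=1))  # (1, 8, 8, 8) heatmap
print(cam.shape)
```

Upsampled to the scan’s resolution and overlaid on the image, such a heatmap shows which regions most strongly drove the prediction, which is how the authors verified attention clustered around the suture.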
What this means for future orthodontic treatment
For patients and families, the practical promise of this work is more consistent decisions about when simple expanders are enough and when stronger or surgical options are truly necessary. By making both the 3D images and the clinical table publicly available, along with the code, the authors invite other groups to build on and test their system. If validated across different populations and machines, AI‑assisted staging of the midpalatal suture could turn a difficult, experience‑dependent judgment into a standardized tool, reducing guesswork and helping tailor jaw‑widening treatments to each individual’s true stage of bone development.
Citation: Zuo, Z., Jia, B., Xiao, Y. et al. A dataset of midpalatal suture maturation stage in cone-beam computed tomography.
Sci Data 13, 531 (2026). https://doi.org/10.1038/s41597-026-06778-3
Keywords: midpalatal suture, orthodontic expansion, cone beam CT, medical imaging AI, bone growth stages