Clear Sky Science

Explainable machine learning prediction of tracheostomy after craniotomy for supratentorial intracerebral hemorrhage


Why this matters for patients and families

When someone suffers a severe type of stroke caused by bleeding in the brain, they often need emergency brain surgery and a breathing machine. One of the hardest bedside decisions for doctors and families is whether the patient will later need a hole in the neck, called a tracheostomy, to help them breathe for a longer time. This study shows how an explainable form of artificial intelligence can estimate that risk early, using just a handful of routine medical measurements, and turn it into a simple bedside tool.

Serious brain bleeding and breathing support

Bleeding in the upper part of the brain, known as supratentorial intracerebral hemorrhage, is one of the most dangerous kinds of stroke. Many of these patients in China and elsewhere undergo open brain surgery to remove the blood and relieve pressure. Even after surgery, nearly half will end up needing prolonged breathing support, and a large fraction receive a tracheostomy to make ventilation safer and more comfortable. Until now, doctors had to rely on experience and broad stroke scores that were not tailored to this specific surgical group, making it difficult to give clear guidance to families or to plan intensive care resources.

Figure 1.

Turning routine data into a prediction tool

The researchers gathered data from two hospitals on 924 adults who had this kind of brain bleed and then underwent surgery. They looked at basic information that is already collected during the first day of care: age, level of consciousness on the Glasgow Coma Scale, how large the blood clot in the brain was, how long the operation took, and a common blood chemistry value called serum bicarbonate, which reflects the body’s acid–base balance. Using a stepwise selection approach, they found that these five factors carried most of the useful information about whether a patient would later receive a tracheostomy.

How the explainable AI model works

With these five pieces of information, the team trained three different computer models: a standard statistical model, a random forest, and a more advanced tree-based method called extreme gradient boosting. They carefully tuned each model and tested them using cross-validation on the main hospital’s data, then checked performance again on the second hospital’s patients. All three methods were good at telling high-risk from low-risk patients, but the gradient boosting model offered the best balance between accuracy and reliability of its risk estimates, and it provided the greatest potential clinical benefit when the results were judged the way a doctor would actually use them at the bedside.
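For readers curious how "cross-validation" works under the hood, the train-and-check cycle can be sketched in a few lines. This is a toy illustration only: the data are synthetic, and a trivial one-variable threshold rule stands in for the study's actual statistical and tree-based models.

```python
import random

# Synthetic stand-in data: (Glasgow Coma Scale score, needed tracheostomy) pairs.
# In this toy world, lower consciousness scores mean higher risk.
random.seed(0)
data = [(gcs, 1 if gcs + random.gauss(0, 2) < 9 else 0)
        for gcs in [random.randint(3, 15) for _ in range(200)]]

def toy_model(train):
    """Fit a trivial classifier: choose the GCS cutoff with best training accuracy."""
    best_cut, best_acc = None, -1.0
    for cut in range(3, 16):
        acc = sum((gcs < cut) == bool(y) for gcs, y in train) / len(train)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

def k_fold_accuracy(data, k=5):
    """Average held-out accuracy over k folds -- the pattern used to compare models
    fairly: each fold is scored by a model that never saw it during training."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        held_out = folds[i]
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        cut = toy_model(train)
        scores.append(sum((gcs < cut) == bool(y) for gcs, y in held_out) / len(held_out))
    return sum(scores) / k

print(f"Cross-validated accuracy of the toy model: {k_fold_accuracy(data):.2f}")
```

The study applied this same train-on-one-part, test-on-the-rest pattern to all three models on the main hospital's data, and then used the second hospital as a fully external check.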

Making the black box transparent

A common worry with artificial intelligence in medicine is that it behaves like a black box, giving answers without reasons. To avoid this, the researchers used a technique called SHAP that breaks down each prediction into contributions from the five inputs. Across the whole group, lower consciousness scores and larger brain hematomas were the strongest drivers of needing a tracheostomy, followed by older age, longer surgery time, and lower bicarbonate levels. For individual patients, the tool can display a simple bar-like picture showing how each factor pushed their personal risk up or down, giving clinicians and families an intuitive explanation instead of just a number.
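The additive breakdown SHAP produces is easiest to see in the special case of a linear model, where each input's contribution has a closed form: its weight times the gap between the patient's value and the population average. The sketch below uses made-up weights and averages purely for illustration; the study's actual model is a gradient boosting model, for which SHAP computes the analogous contributions numerically.

```python
# Hypothetical linear risk score; weights and population means are illustrative
# only, not taken from the study.
weights = {"gcs": -0.08, "hematoma_ml": 0.02, "age": 0.01,
           "op_hours": 0.10, "bicarbonate": -0.03}
means   = {"gcs": 10.0, "hematoma_ml": 40.0, "age": 60.0,
           "op_hours": 3.0, "bicarbonate": 24.0}

def shap_linear(patient):
    """For a linear model f(x) = b + sum(w_i * x_i), the exact SHAP value of
    feature i is w_i * (x_i - mean_i): how far this patient's value pushes the
    score away from the population baseline."""
    return {k: weights[k] * (patient[k] - means[k]) for k in weights}

patient = {"gcs": 6.0, "hematoma_ml": 65.0, "age": 72.0,
           "op_hours": 4.5, "bicarbonate": 20.0}
contrib = shap_linear(patient)

# Key property: the contributions add up exactly to the model's deviation
# from the baseline score -- this is what makes the bar-like picture honest.
deviation = sum(weights[k] * (patient[k] - means[k]) for k in weights)
assert abs(sum(contrib.values()) - deviation) < 1e-9

for k, v in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"{k:>12}: {v:+.3f}")
```

Sorting the contributions by size is exactly what the bedside display does: the largest bars show which factors drove this particular patient's risk up or down.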

Figure 2.

From research model to bedside tool

To make their work usable in everyday practice, the authors built a free web-based calculator. A clinician can enter a patient’s age, Glasgow Coma Scale score, hematoma size, operation length, and bicarbonate level, and the tool returns an estimated probability that the patient will need a tracheostomy, along with a visual explanation. Although the study has limits—it is retrospective, from one region, and does not yet include longer-term changes in the patient’s condition—it shows that a small, transparent set of factors can capture much of the risk.
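A web calculator of this kind is essentially a thin wrapper that maps the five inputs to a probability. The sketch below uses a hypothetical logistic formula with invented coefficients to show the shape of that step; the real tool queries the fitted gradient boosting model instead.

```python
import math

def predicted_trach_probability(age, gcs, hematoma_ml, op_hours, bicarbonate):
    """Turn the five bedside inputs into a risk probability via a logistic
    function. All coefficients here are invented for illustration and do not
    come from the published model."""
    score = (-2.0
             + 0.03 * age          # older age -> higher risk
             - 0.25 * gcs          # lower consciousness -> higher risk
             + 0.02 * hematoma_ml  # larger hematoma -> higher risk
             + 0.30 * op_hours     # longer operation -> higher risk
             - 0.10 * bicarbonate) # lower bicarbonate -> higher risk
    return 1.0 / (1.0 + math.exp(-score))

p = predicted_trach_probability(age=70, gcs=6, hematoma_ml=60,
                                op_hours=4, bicarbonate=22)
print(f"Estimated tracheostomy probability: {p:.1%}")
```

Whatever model sits behind it, the output is always a number between 0 and 1, which the interface then pairs with the per-patient explanation described above.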

What this means for care decisions

In plain terms, the study concludes that an explainable AI model using just five routine measures can give a trustworthy early estimate of whether a patient who has had brain surgery for a severe bleed is likely to need a tracheostomy. This kind of tool cannot replace medical judgment, but it can support more informed conversations with families, help plan staffing and equipment in intensive care units, and guide future trials on when tracheostomy should be performed. With further testing in other hospitals, it could become part of standard care for some of the sickest stroke patients.

Citation: Qiao, F., Xue, X., Yu, H. et al. Explainable machine learning prediction of tracheostomy after craniotomy for supratentorial intracerebral hemorrhage. Sci Rep 16, 11495 (2026). https://doi.org/10.1038/s41598-026-41953-x

Keywords: stroke, tracheostomy, brain surgery, machine learning, intensive care