Clear Sky Science
Explainable AI (XAI) for transparent resource allocation in public safety communications networks
Why smarter emergency radios matter
When a major storm, wildfire, or large-scale accident strikes, hundreds of police officers, firefighters, and medics suddenly compete for the same limited radio and data channels. If these communication lifelines are overloaded or unfairly shared, people can be put at risk. This paper explores a new way to use artificial intelligence to manage those scarce communication resources in public safety networks, in a form that emergency agencies can see into, question, and trust.
How emergency networks juggle many urgent voices
Public safety networks are the specialized radio and data systems that keep first responders connected during crises. In these moments, demand for bandwidth spikes, conditions change from minute to minute, and different users have very different levels of urgency. Traditional methods rely on fixed rules or heavy optimization software that struggles when the situation shifts quickly. Newer AI-based systems can adapt on the fly but often operate as black boxes, offering no clear reason why one ambulance got priority over another patrol car. That lack of transparency can undermine trust, make it hard to spot hidden bias, and complicate later reviews of what went right or wrong.

Opening the black box of AI decisions
The authors propose a framework called SLIRA that makes AI-driven resource allocation both efficient and explainable. Instead of simply telling the network how to split bandwidth, the system always produces two things together: a recommendation and an explanation of what drove that recommendation. It does this using two widely studied explanation tools. One, known as SHAP, gives a “big-picture” view of which factors—such as user demand, mission urgency, or network congestion—generally matter most across the whole system. The other, called LIME, zooms in on individual decisions, showing why a specific user at a specific moment was treated a certain way.
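For readers who want to see the two tools side by side, the short Python sketch below runs the real shap and lime libraries on a toy scoring model. The feature names, random data, and stand-in random-forest model are assumptions made for illustration; the paper's actual predictor is not shown here.

```python
# A minimal sketch of the two explanation styles described above, using the
# real shap and lime libraries. The features, data, and random-forest model
# are illustrative assumptions, not the authors' actual predictor.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["demand", "urgency", "congestion", "link_quality"]

# Toy training data: network-state features -> desirability score for a user.
X = rng.random((500, 4))
y = 1.0 * X[:, 0] + 2.0 * X[:, 1] - 1.5 * X[:, 2] + 0.5 * X[:, 3]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Global view (SHAP): which factors matter most across the whole system.
shap_values = shap.TreeExplainer(model).shap_values(X[:100])
for name, imp in zip(features, np.abs(shap_values).mean(axis=0)):
    print(f"{name:12s} mean |SHAP| = {imp:.3f}")

# Local view (LIME): why this specific user was scored this way right now.
lime_explainer = LimeTabularExplainer(X, feature_names=features, mode="regression")
print(lime_explainer.explain_instance(X[0], model.predict, num_features=3).as_list())
```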
Turning explanations into a steering wheel
Rather than adding explanations after the fact, SLIRA builds them into the heart of the decision process. At each time step, a predictive model converts the current network state—who needs what, how urgent it is, and how good their connections are—into a table of “desirability scores” for assigning each resource to each user. SHAP and LIME then analyze these scores and fuse their insights into a single guidance signal. This signal nudges the allocations over time, pushing them toward patterns that remain understandable, stable, and fair, instead of chasing short-term gains that might be hard to justify later. In parallel, fairness rules check that no group of users is systematically favored or neglected, not just in one moment but across extended operations.
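The exact fusion rule is not spelled out in this summary, but the hypothetical sketch below conveys the mechanics: two attribution signals are blended into one guidance matrix, which adjusts the desirability scores before resources are assigned. All of the weights here are placeholders, not the paper's values.

```python
# Hypothetical sketch of explanation-guided allocation. The 0.5/0.5 blend,
# the 0.3 guidance weight, and the 0.2 stability penalty are all
# illustrative assumptions, not the paper's actual fusion rule.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_resources = 4, 3

scores = rng.random((n_users, n_resources))   # predicted desirability scores
prev_alloc = np.eye(n_users, n_resources)     # last time step's assignment

# Fuse a global (SHAP-style) and a local (LIME-style) attribution signal
# into a single guidance matrix, here by a simple weighted average.
global_signal = rng.random((n_users, n_resources))
local_signal = rng.random((n_users, n_resources))
guidance = 0.5 * global_signal + 0.5 * local_signal

# Nudge the raw scores: reward agreement with the guidance signal and
# penalize assignments that churn away from the previous time step.
adjusted = scores + 0.3 * guidance - 0.2 * (1 - prev_alloc)

# Assign each resource to the user with the best adjusted score.
allocation = np.zeros_like(adjusted)
for r in range(n_resources):
    allocation[np.argmax(adjusted[:, r]), r] = 1
print(allocation)
```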
Building in caution through uncertainty and fairness
Disasters are messy, and the data describing them is often noisy or incomplete. To cope with this, SLIRA adds a layer of Bayesian uncertainty modeling, which attaches a sense of confidence to both the AI’s decisions and its explanations. In practice, this lets operators know when the system is sure of its choices and when it is effectively “hedging its bets” because conditions are unclear. The framework also monitors how explanations change over time; sudden, unexplained swings in what the AI claims is important can signal unstable behavior or even potential attacks on the system. By keeping explanations concise and focusing on the most influential factors, SLIRA aims to be something a human decision-maker can realistically digest during a fast-moving event.
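A rough sketch of both safeguards, with an assumed drift measure (cosine distance between successive attribution vectors) and assumed thresholds, might look like this:

```python
# Illustrative sketch of both safeguards. The posterior draws, the
# cosine-distance drift metric, and the 0.1 / 0.5 thresholds are
# assumptions for the example, not values from the paper.
import numpy as np

rng = np.random.default_rng(2)

# Uncertainty: sample many plausible scores (e.g. from a Bayesian posterior
# or an ensemble) and report a spread, not just a single point estimate.
draws = rng.normal(loc=0.7, scale=0.15, size=200)
mean, std = draws.mean(), draws.std()
verdict = "confident" if std < 0.1 else "hedging its bets"
print(f"score = {mean:.2f} +/- {std:.2f} ({verdict})")

# Drift: compare consecutive attribution vectors; a sudden jump in what the
# AI claims is important can flag unstable behavior or a potential attack.
def cosine_distance(a, b):
    return 1 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

attr_before = np.array([0.50, 0.30, 0.15, 0.05])  # urgency dominated
attr_now = np.array([0.05, 0.10, 0.80, 0.05])     # congestion dominates
drift = cosine_distance(attr_before, attr_now)
print(f"explanation drift = {drift:.2f}" + ("  <- investigate" if drift > 0.5 else ""))
```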

Putting the framework to the test
To see how this approach performs, the authors simulate realistic emergency communication scenarios with fluctuating traffic, mixed responder roles, and protected user groups for fairness checks. They compare SLIRA against several alternatives: an ideal mathematical solver, simple rule-based methods, and standard AI systems with and without after-the-fact explanations. While the exact solver achieves slightly higher raw efficiency in static, perfectly known settings, it is slow and offers no insight into its choices. SLIRA, by contrast, comes within about 1–2 percent of that optimal efficiency, cuts fairness gaps by more than 40 percent, and greatly improves the stability of decisions over time, all while running fast enough for real-time use.
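To make the fairness numbers concrete, here is one plausible way such a gap could be computed across responder groups, alongside Jain's fairness index, a standard measure of even sharing. The groups and figures are invented for the example; the paper's exact metric is not reproduced here.

```python
# One plausible fairness-gap computation; the responder groups and numbers
# are made up for illustration, not taken from the paper's evaluation.
import numpy as np

# Average bandwidth (Mbps) received by each protected responder group.
group_throughput = {"fire": 4.8, "police": 5.1, "medical": 4.9}
values = np.array(list(group_throughput.values()))

# Gap between the best- and worst-served groups, relative to the mean.
fairness_gap = (values.max() - values.min()) / values.mean()

# Jain's fairness index: 1.0 means perfectly even sharing across groups.
jain = values.sum() ** 2 / (len(values) * (values ** 2).sum())
print(f"fairness gap = {fairness_gap:.1%}, Jain index = {jain:.3f}")
```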
What this means for future emergency response
For non-specialists, the key takeaway is that this research shows how AI can be designed not just to squeeze more performance out of critical communication networks, but to do so in a way that is understandable, auditable, and fair. In SLIRA, explanations are not cosmetic add-ons; they actively shape how the system behaves, helping ensure that scarce radio and data resources are shared in a way that can be defended to responders, regulators, and the public. If developed further and tested with real-world data, such explainable allocation systems could help emergency services react faster and more equitably when lives are on the line.
Citation: Alammar, M., Al Ayidh, A., Abbas, M. et al. Explainable AI (XAI) for transparent resource allocation in public safety communications networks. Sci Rep 16, 14180 (2026). https://doi.org/10.1038/s41598-026-43440-9
Keywords: public safety networks, explainable AI, resource allocation, algorithmic fairness, emergency communications