Clear Sky Science
Knowledge graph–large language model fusion approach for emergency knowledge recommendation in gas tunnels
Why smarter tunnel safety matters
Gas tunnels are vital arteries for energy and transportation, but when something goes wrong underground, rescuers have only minutes to act. The knowledge they need—past accident reports, technical manuals, and emergency plans—is usually scattered across many documents and hard to search under pressure. This paper presents a new way to automatically collect, organize, and deliver that buried know‑how to responders, using a combination of advanced language models and network‑style knowledge maps. The goal is simple: turn messy text into clear, trustworthy guidance during a crisis.
Turning scattered documents into connected knowledge
Emergency teams rely on a wide mix of information: government guidelines, engineering reports, internal accident reviews, and more. The authors first assemble such sources into a custom dataset focused on gas tunnel incidents. Instead of asking experts to hand‑code rules or design rigid classifications, they use a large language model (LLM) as an intelligent reader. With carefully designed prompts, the LLM combs through the text, identifies key players (such as equipment, locations, hazards, and actions) and the links between them, and then expresses each finding as a simple three‑part fact. These facts become nodes and connections in a knowledge graph, a kind of map that shows how concepts in tunnel emergencies relate to each other.
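The extraction step above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the LLM call is mocked, and the prompt wording, function names, and example facts are all hypothetical. The key idea is that the model's free-text output is parsed into three-part facts, which then become edges in a simple graph.

```python
# Sketch of LLM-driven triple extraction feeding a knowledge graph.
# The LLM call is mocked; prompt and names are illustrative only.
from collections import defaultdict

EXTRACTION_PROMPT = (
    "Read the accident report below and list every fact as a line in the "
    "form: subject | relation | object\n\nReport:\n{report}"
)

def mock_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call.
    return (
        "methane leak | occurs_in | tunnel section K3\n"
        "methane leak | requires | forced ventilation\n"
        "forced ventilation | performed_by | axial fan"
    )

def extract_triples(report: str) -> list[tuple[str, str, str]]:
    """Ask the (mocked) LLM for facts and parse them into triples."""
    raw = mock_llm(EXTRACTION_PROMPT.format(report=report))
    triples = []
    for line in raw.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

def build_graph(triples):
    """Adjacency map: node -> list of (relation, neighbour) edges."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph

graph = build_graph(extract_triples("...accident report text..."))
```

In a real pipeline the mocked function would be replaced by an actual LLM API call, and the parser would need to tolerate malformed lines, which is why it silently skips anything that does not split into exactly three parts.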

How the system finds the right facts in seconds
When responders pose a question—for example, how to restore airflow after a sudden gas release—the system does more than keyword matching. It first detects important terms in the question and converts both the question and each fact in the graph into numerical vectors that capture meaning rather than just wording. Using fast similarity search, it pulls out the most relevant slices of the graph. A second step then reorders these candidate facts so that those containing exact matches to the user’s terms rise to the top. By limiting the final bundle to a manageable number of highly relevant facts, the system can both respond quickly and stay within the language model’s context window.
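The two-stage retrieval described above can be sketched as follows. This is a toy illustration under loose assumptions: simple word-count vectors stand in for the learned embeddings a real system would use, and the ranking key and all names are hypothetical. Stage one ranks facts by cosine similarity to the question; stage two reranks so that facts containing the question's exact key terms rise first; the result is capped at a small top-k.

```python
# Toy two-stage retrieval: cosine similarity on bag-of-words vectors,
# then a rerank favouring exact key-term matches, capped at top_k.
# A production system would use neural sentence embeddings instead.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a word-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, facts, key_terms, top_k=3):
    q = embed(question)
    # Stage 1: semantic similarity score for every fact.
    scored = [(cosine(q, embed(f)), f) for f in facts]
    # Stage 2: rerank -- exact key-term hits outrank similarity alone.
    reranked = sorted(
        scored,
        key=lambda sf: (sum(t in sf[1].lower() for t in key_terms), sf[0]),
        reverse=True,
    )
    return [fact for _, fact in reranked[:top_k]]

facts = [
    "forced ventilation restores airflow after a gas release",
    "axial fans are inspected monthly",
    "workers evacuate via the escape gallery",
]
top = retrieve("how to restore airflow after a gas release",
               facts, key_terms=["airflow", "gas"])
```

Capping the bundle at `top_k` is what keeps the final prompt small enough for the language model while still covering the most relevant facts.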
Teaching the model to answer with its “feet on the ground”
Once the right patch of the knowledge graph is found, it is translated back into short, readable descriptions and fed into the LLM alongside the user’s question. This setup, often called retrieval‑augmented generation, acts like handing the model a focused briefing pack before it speaks. The model is not retrained or fine‑tuned; instead, it stays frozen and is simply guided by up‑to‑date, traceable facts. This helps curb the well‑known problem of “hallucinations,” where language models confidently invent details. Here, the model’s answer is anchored in documented procedures and past cases drawn from the graph, and those sources can be inspected later for transparency.
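The "briefing pack" idea above amounts to simple prompt assembly, sketched below. The template wording and function names are illustrative, not the paper's exact prompt: retrieved triples are turned back into short sentences and placed ahead of the user's question, so the frozen model answers from documented facts rather than from memory.

```python
# Sketch of retrieval-augmented generation: verbalize retrieved triples
# and prepend them to the question. Template wording is illustrative.
def verbalize(triple: tuple[str, str, str]) -> str:
    subj, rel, obj = triple
    return f"{subj} {rel.replace('_', ' ')} {obj}."

def build_rag_prompt(triples, question: str) -> str:
    briefing = "\n".join(f"- {verbalize(t)}" for t in triples)
    return (
        "Answer using ONLY the facts below; do not invent details.\n\n"
        f"Facts:\n{briefing}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    [("methane leak", "requires", "forced ventilation")],
    "How do we restore airflow after a sudden gas release?",
)
```

Because the facts in the prompt carry their graph provenance, an answer can later be traced back to the documents the triples came from, which is what makes the system auditable.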

Putting the approach to the test
To see if the method truly helps in emergencies, the authors created 50 realistic question‑and‑answer cases covering common tunnel crises: gas build‑up, fires, failed ventilation, trapped workers, and more. They compared their system with several strong language models, including a widely used commercial model, on both automated text‑matching scores and human ratings. Industry professionals and researchers judged each answer for accuracy, completeness, clear logic, and speed. The graph‑guided system not only matched real case decisions more closely but also produced more detailed and logically ordered steps than models working alone. Although it took slightly longer to respond, experts viewed this trade‑off as worthwhile in high‑risk situations where getting the right steps matters more than shaving off a second or two.
What this means for real‑world tunnel safety
For non‑specialists, the key message is that the study shows how artificial intelligence can move beyond generic advice and become a dependable assistant in very specific, dangerous settings. By fusing a knowledge graph with a language model, the authors build a system that reads and organizes large volumes of technical material, then delivers grounded, step‑by‑step recommendations when a gas tunnel accident occurs. While the approach still depends on the breadth of available data and could be refined for faster, interactive use, it points toward future emergency tools that are both smart and explainable—helping human decision‑makers act faster and with greater confidence when lives are on the line.
Citation: Xu, N., Chen, X., Luo, J. et al. Knowledge graph–large language model fusion approach for emergency knowledge recommendation in gas tunnels. Sci Rep 16, 11438 (2026). https://doi.org/10.1038/s41598-026-39204-0
Keywords: gas tunnel safety, emergency decision support, knowledge graph, large language models, retrieval-augmented generation