Clear Sky Science

An open-source deep learning-based toolbox for automated auditory brainstem response analyses (ABRA)


Why Better Hearing Tests Matter

As people live longer, hearing loss is becoming one of the most common health problems worldwide. It does more than make conversations harder; it is now linked to memory problems and dementia later in life. To understand and treat hearing loss, scientists rely on a special kind of electrical recording from the ear and brainstem called the auditory brainstem response, or ABR. These recordings are powerful but traditionally require experts to inspect squiggly lines by eye, a slow and subjective process. This paper introduces ABRA, a free, automated software toolbox that uses modern artificial intelligence to read these signals quickly and consistently.

Figure 1.

From Clicks in the Ear to Waves on a Screen

When a brief sound is played into the ear, tiny sensory cells in the inner ear and the nerve fibers that follow them fire in a rapid burst. This activity travels up the brainstem and can be picked up by electrodes placed on the head. The result is an ABR trace: a series of small waves that appear within the first few thousandths of a second after the sound. In mice, the first large wave reflects activity in the auditory nerve and is especially sensitive to early damage in the cochlea, including the loss of the connections (synapses) between nerve fibers and hair cells. Researchers read these waves to estimate how loud a sound must be to be heard (the threshold) and how strong and fast the nerve responses are. Small shifts in these measures can reveal hidden injury before it shows up on standard hearing tests.
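The two numbers researchers most often read off wave I, its latency (how fast) and its amplitude (how strong), can be illustrated with simple peak picking on a synthetic trace. This is only a conceptual sketch, not the paper's method: the sampling rate, wave shape, and all values below are made up for illustration.

```python
# Illustrative sketch (not from the paper): estimating wave I latency
# and amplitude from a single ABR trace by simple peak picking.
# The trace, sampling rate, and wave shape here are synthetic.
import numpy as np
from scipy.signal import find_peaks

fs = 100_000                        # assumed 100 kHz sampling rate
t = np.arange(0, 0.010, 1 / fs)     # first 10 ms after the click

# Synthetic trace: a "wave I" bump near 1.5 ms plus a smaller later wave.
trace = (1.2 * np.exp(-((t - 0.0015) / 0.0002) ** 2)
         + 0.6 * np.exp(-((t - 0.0030) / 0.0003) ** 2))  # microvolts

peaks, _ = find_peaks(trace, height=0.3)   # local maxima above 0.3 uV
wave1 = peaks[0]                           # earliest peak = wave I candidate
latency_ms = t[wave1] * 1000
amplitude_uv = trace[wave1]
print(f"wave I latency ~{latency_ms:.2f} ms, amplitude ~{amplitude_uv:.2f} uV")
```

On real recordings the signal is noisy and the peaks overlap, which is exactly why hand-tuned rules like this break down and a learned model becomes attractive.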

The Problem with Manual Reading

Although ABR tests are objective recordings, the way they are analyzed often is not. Different laboratories use different software, and even experienced reviewers may disagree on exactly where a wave begins, peaks, and ends, especially when signals are weak or noisy. Manually marking thousands of traces from large experiments can take many hours and makes it hard to compare results across studies. Some groups have tried rule-based computer methods or traditional machine-learning techniques, but these have limited flexibility and may not cope well with the wide variety of recording settings, mouse strains, and types of hearing damage used in modern research.

A New Automated Toolbox

ABRA (Auditory Brainstem Response Analyzer) tackles these challenges by combining deep-learning models with a user-friendly interface. The authors trained convolutional neural networks, a type of artificial intelligence that excels at recognizing complex patterns, on more than twenty thousand ABR recordings collected from three independent hearing research laboratories. Despite differences in equipment, sound settings, and mouse models—including animals that aged faster or were exposed to loud noise—the same models learned to detect key features of the waves and to distinguish responses that truly reflect hearing from those that are just background noise. ABRA includes two main tools: one that pinpoints the timing and height of the first ABR wave, and another that decides, across a series of sound levels, which level marks the true hearing threshold.
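The threshold-finding step can be pictured as a simple rule sitting on top of any per-level "is there a response?" classifier. The sketch below is a conceptual illustration, not ABRA's actual network: the function name and the example levels are assumptions.

```python
# Conceptual sketch (assumed, not ABRA's model): given a yes/no
# "response present?" decision for each sound level, the hearing
# threshold is the quietest level at which a response is detected.
def call_threshold(levels_db, detected):
    """levels_db: sound levels in dB SPL, quietest first.
    detected: matching booleans from any response classifier."""
    for level, present in zip(levels_db, detected):
        if present:
            return level          # first (quietest) detected response
    return None                   # no response at any tested level

levels = [20, 30, 40, 50, 60, 70, 80, 90]
flags  = [False, False, False, True, True, True, True, True]
print(call_threshold(levels, flags))  # → 50
```

ABRA's contribution is in making the per-level decision reliably from noisy traces; once those decisions are trustworthy, reading off the threshold is straightforward.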

Figure 2.

As Accurate as Experts, Dozens of Times Faster

To see how well ABRA works, the team compared its automatic measurements to thousands of human-made labels. For the first wave of the ABR, ABRA’s estimates of timing and size almost always fell within a tiny fraction of a millisecond and a fraction of a microvolt of expert markings, with the largest errors occurring only when the signal itself was barely visible. For threshold detection, several types of machine-learning models were tested, and the deep-learning approach outperformed simpler methods across all accuracy measures. In head-to-head comparisons, ABRA agreed with expert human raters about as often as two experts agreed with each other and matched or beat an established cross-correlation method used in another popular ABR package. Crucially, analyzing 90 sets of mouse data took experts about an hour by hand but only under a minute with ABRA, a speedup of roughly 75-fold.
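The cross-correlation idea mentioned above rests on a simple observation: traces recorded above threshold resemble a clear response, while a below-threshold trace is mostly noise. A toy version, with entirely synthetic traces and parameters chosen only for illustration:

```python
# Toy illustration (assumed) of the cross-correlation approach:
# correlate each trace with a clear supra-threshold template; the
# correlation collapses once the trace contains no real response.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.010, 500)
template = np.exp(-((t - 0.0015) / 0.0003) ** 2)      # clear response shape

above = template + 0.1 * rng.standard_normal(t.size)  # level above threshold
below = 0.1 * rng.standard_normal(t.size)             # noise-only trace

r_above = np.corrcoef(template, above)[0, 1]          # high correlation
r_below = np.corrcoef(template, below)[0, 1]          # near zero
print(f"above threshold r={r_above:.2f}, below r={r_below:.2f}")
```

A fixed correlation cutoff works when responses look alike, which is one reason a learned classifier can generalize better across labs, equipment, and mouse models.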

What This Means for Hearing Research

ABRA turns what used to be a slow, subjective step in hearing experiments into a fast, standardized, and shareable process. Because it is open source, freely available online, and able to read multiple common file formats, it can be slotted into many existing workflows without special programming skills. The current models are trained on mouse data and best validated for the first ABR wave, so very unusual cases or other species may still require expert review or future retraining. Even so, the toolbox shows how artificial intelligence can make sense of complex biological signals, helping scientists study how hearing fails with age, noise, or disease—and ultimately supporting efforts to protect both hearing and brain health.

Citation: Erra, A., Miller, C.M., Chen, J. et al. An open-source deep learning-based toolbox for automated auditory brainstem response analyses (ABRA). Sci Rep 16, 9855 (2026). https://doi.org/10.1038/s41598-026-38045-1

Keywords: hearing loss, auditory brainstem response, deep learning, automated signal analysis, mouse neuroscience