Researchers Develop “Hearing Car” to Enhance Safety for Autonomous Vehicles

Researchers at the Fraunhofer Institute for Digital Media Technology in Oldenburg, Germany, are pioneering a new technology designed to enhance the safety of autonomous vehicles. The project, dubbed the “Hearing Car,” aims to equip cars with external microphones and artificial intelligence (AI) to identify and classify sounds in the environment. This innovation will enable vehicles to respond to hazards that may be out of their visual range, such as approaching emergency vehicles or pedestrians.
In March 2025, researchers conducted a significant test of the Hearing Car by driving a prototype over 1,500 kilometers from Oldenburg to a proving ground in northern Sweden. This journey tested the system under various challenging conditions, including dirt, snow, slush, and freezing temperatures. “It’s about giving the car another sense, so it can understand the acoustic world around it,” explained Moritz Brandes, a project manager for the Hearing Car.
Innovative Sound Detection Technology
Sound detection in the Hearing Car relies on external microphone modules (EMMs), each of which houses three microphones in a package about 15 centimeters across. Mounted at the rear of the vehicle, where wind noise is lowest, the modules capture audio, digitize it, and convert it into spectrograms, which a region-based convolutional neural network (RCNN) then analyzes. If the system detects a siren, it cross-references that detection with the vehicle’s cameras to confirm the presence of an emergency vehicle.
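To make that pipeline concrete, here is a minimal sketch in Python of the audio-to-spectrogram-to-classifier flow, with a small plain CNN standing in for the RCNN. The sample rate, network architecture, and class list are illustrative assumptions, not details of Fraunhofer’s system; PyTorch and SciPy are assumed available.

```python
# Sketch only: raw audio -> log spectrogram -> small CNN classifier.
# All parameter values and the label set are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

SAMPLE_RATE = 16_000                                # assumed microphone sample rate
CLASSES = ["siren", "horn", "voice", "background"]  # hypothetical label set

def audio_to_spectrogram(samples: np.ndarray) -> torch.Tensor:
    """Convert a 1-D audio buffer into a log-power spectrogram tensor."""
    _, _, sxx = spectrogram(samples, fs=SAMPLE_RATE, nperseg=512, noverlap=256)
    log_sxx = np.log10(sxx + 1e-10)                 # compress dynamic range
    return torch.from_numpy(log_sxx).float().unsqueeze(0)  # (1, freq, time)

class SoundClassifier(nn.Module):
    """Tiny CNN over spectrograms; a stand-in for the RCNN in the article."""
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Usage: classify one second of (here random placeholder) audio.
model = SoundClassifier().eval()
buffer = np.random.randn(SAMPLE_RATE)               # stands in for microphone input
with torch.no_grad():
    logits = model(audio_to_spectrogram(buffer).unsqueeze(0))  # add batch dim
print(CLASSES[int(logits.argmax())])
```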
This integration of audio and visual data enhances the reliability of the vehicle’s responses and reduces the likelihood of false positives. Sound localization utilizes beamforming, a technique that allows the system to determine the direction of incoming audio signals. All processing occurs onboard the vehicle, which minimizes latency and addresses potential connectivity issues in areas with poor internet service. Brandes noted that the workload can be efficiently managed by a modern Raspberry Pi.
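The article does not detail the beamforming implementation, but the closely related time-difference-of-arrival (TDOA) approach for a single microphone pair illustrates the underlying principle: a sound arrives at the nearer microphone slightly earlier, and that delay maps to a bearing. The microphone spacing (loosely matching the 15-centimeter module), sample rate, and test signal below are assumptions.

```python
# Sketch of acoustic direction finding via time difference of arrival,
# a simple relative of the beamforming mentioned above. All parameter
# values are assumptions for illustration.
import numpy as np
from scipy.signal import chirp

SAMPLE_RATE = 16_000    # Hz, assumed
MIC_SPACING = 0.15      # meters between the two microphones, assumed
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def estimate_bearing(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate source bearing in degrees from two microphone channels.

    Cross-correlates the channels to find the sample lag at which they
    best align, converts that lag to a time delay, and maps the delay
    to an angle via delay = spacing * sin(angle) / speed_of_sound.
    """
    corr = np.correlate(left, right, mode="full")
    lag = int(corr.argmax()) - (len(right) - 1)   # samples by which left lags right
    delay = lag / SAMPLE_RATE                     # seconds
    sin_angle = np.clip(SPEED_OF_SOUND * delay / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_angle)))

# Usage: a siren-like sweep reaches the left channel 4 samples early,
# as it would if the source sat off to the left of the array.
t = np.arange(int(0.1 * SAMPLE_RATE)) / SAMPLE_RATE
sweep = chirp(t, f0=500, t1=0.1, f1=1500)         # 500 -> 1500 Hz wail
shift = 4                                         # inter-microphone delay, samples
print(estimate_bearing(sweep[shift:], sweep[:-shift]))  # approx. -35 degrees
```

Under this sign convention, a negative angle means the sound reached the first channel early, placing the source on that side of the array.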
Initial benchmarks indicate that the Hearing Car can detect sirens from up to 400 meters away in quiet, low-speed environments. At highway speeds, increased wind and road noise cut that range to under 100 meters. The system is designed to trigger alerts within approximately two seconds, allowing drivers or autonomous systems to react promptly; for context, a vehicle traveling 100 kilometers per hour covers about 56 meters in those two seconds, which makes the reduced highway range a tight window.
The Journey of Developing the Hearing Car
The concept of vehicles that can ‘hear’ has been in development for over a decade. “We’ve been working on making cars hear since 2014,” Brandes said, noting that early experiments included detecting a nail in a tire by its rhythmic tapping on the pavement and enabling voice commands to open the trunk. Support from a Tier-1 supplier and a major automaker has since accelerated the project’s push toward automotive-grade development.
With the rise of electric vehicles (EVs), the automotive industry is recognizing the value of auditory perception. Eoin King, a mechanical engineering professor at the University of Galway, emphasized the importance of this technology: “A human hears a siren and reacts—even before seeing where the sound is coming from. An autonomous vehicle needs to do the same if it is to coexist safely with humans.”
Brandes reflected on a pivotal moment during testing when he failed to hear an emergency siren inside a well-insulated electric vehicle until it was nearly upon him. This experience underscored the necessity of audio detection in vehicles, especially as EV adoption continues to grow.
King, who directs acoustic research at his university, acknowledged the transformative potential of AI in this realm. “Machine listening is really the game-changer,” he stated, highlighting the shift from traditional physics-based approaches to AI-driven systems that can better generalize across various environments.
Despite these advancements, King remains cautious about the technology’s near-term adoption. He anticipates that it will initially appear in premium vehicles or autonomous fleets, with widespread integration taking more time. “Hearing technology will get there, but step by step,” he noted, citing lane-departure warnings as a precedent for how new technologies can gradually become standard.
As the researchers continue to refine their algorithms, they are also considering the challenges of ensuring the system does not trigger false alarms. “If you train a car to stop when it hears someone yelling ‘help,’ what happens when kids do it as a prank?” King asked, stressing the importance of thorough testing before the technology can be safely deployed on public roads.
Overall, both Brandes and King agree that integrating multiple sensory modalities—cameras, lidar, and microphones—is essential for the future of autonomous vehicles. “Autonomous vehicles that rely only on vision are limited to line of sight,” King explained. “Adding acoustics adds another degree of safety.”