Scientists exploring methods for detecting gunshots fired by illegal rainforest poachers have combined neural networks with acoustic sensing technology to differentiate gunshots from natural sounds.
The Cornell University-led team behind the novel approach said their process can improve gunshot identification and help authorities and conservationists locate poachers and track their movement patterns. The team also hopes to adapt their technology to work across a variety of environments where distinguishing gunshots from naturally occurring sounds could offer value.
According to a statement from the Acoustical Society of America detailing the team’s work, acoustic sensors designed to detect gunshots have already been placed throughout rainforests. In theory, these acoustic sensor webs can identify gunshots associated with poaching activities and point law enforcement and conservation authorities toward the culprits. In practice, acoustic sensors have struggled to distinguish natural sounds in the rainforest from actual gunshots.
For example, the sensors can easily differentiate bird chirps or whistles from human-made sounds. However, when the sensors pick up the sounds of branches cracking, trees falling, or water dripping, the researchers said the devices can often “conflate” them with gunshots. The issue has limited their effectiveness and stymied potential efforts to expand acoustic gunshot identification to other locations and environments.
“The belly of the rainforest is loud, and sorting through a constant influx of sound data is computationally demanding,” the research team explained.

Hoping to improve on the technology, Naveen Dhar, along with collaborators from Cornell University’s K. Lisa Yang Center for Conservation Bioacoustics and Elephant Listening Project, tapped into the computing power of modern neural networks. According to the team’s statement, this meant creating a portable device that could accompany the sensors and process acoustic signals at their source to help operators quickly separate false-positive signals from actual gunshots that could indicate poaching “in real time.”
The team’s proposed model works by collecting audio signals from autonomous recording units (ARUs) already used by researchers and conservationists to capture and record long-term soundscapes. Because these sensitive microphones are meant to run for extended periods of observation, they are designed to be highly power-efficient.
In Dhar’s proposed model, a web of energy-efficient ARUs is strategically placed across a forest where poaching is known to occur. These individual listening outposts are connected to a central hub that handles more complex processing tasks.
When an ARU picks up a sound that it flags as “gunshot likely,” that data is relayed to the local device’s built-in microprocessor. Because the team’s proposed neural network approach is designed to operate on site, the potential gunshot is analyzed and either filtered out as something else or flagged as a gunshot before any result is passed to the central hub.
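The on-device gating step described above can be sketched in a few lines. The scoring function here is a toy stand-in for the team’s neural network, and the feature names and threshold are invented for illustration; the point is only the structure: score locally, forward to the hub only above a confidence cutoff.

```python
# Hypothetical sketch of on-site filtering: an ARU flags a clip, a local
# model scores it, and only high-confidence candidates wake the hub.
# The scoring heuristic and threshold below are illustrative inventions,
# not the team's actual network or parameters.

GUNSHOT_THRESHOLD = 0.8  # assumed confidence cutoff


def score_clip(features):
    """Stand-in for the embedded neural network: returns a pseudo-
    probability that a clip contains a gunshot. A real deployment would
    run a trained model (e.g. a quantized CNN) on the microprocessor."""
    onset_sharpness, bandwidth, duration = features
    # Gunshots tend to be short, broadband, high-energy impulses.
    score = 0.5 * onset_sharpness + 0.5 * bandwidth
    if duration > 0.5:  # sustained sounds (rain, chainsaws) score lower
        score *= 0.3
    return min(score, 1.0)


def should_forward(features, threshold=GUNSHOT_THRESHOLD):
    """Decide on-device whether to relay the detection to the hub."""
    return score_clip(features) >= threshold


# A sharp, broadband 20 ms impulse vs. a long dripping sound
impulse = (0.95, 0.9, 0.02)   # forwarded
dripping = (0.4, 0.2, 2.0)    # filtered out locally
```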
According to the study authors, only sounds identified as gunshots are passed from the ARU to the central hub, “initiating data collection from other devices in the web.” The added data available only to the central hub allows the system to make a final determination on whether the suspect sound is a false positive or a true gunshot.
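The hub-side trigger could look something like the sketch below: one unit reports a gunshot, and the hub pulls recent buffered audio from every other unit in the web so the final determination rests on several vantage points. The class and method names are illustrative; the article does not specify the protocol.

```python
# Illustrative sketch of the hub's cross-check step. A lone false
# positive (a branch cracking near one sensor) is easier to rule out
# once corroborating audio arrives from the rest of the web.

class CentralHub:
    def __init__(self, units):
        # Map of ARU id -> callable returning that unit's buffered audio.
        self.units = units
        self.collected = {}

    def on_gunshot_report(self, reporting_id):
        """Request buffered clips from every unit, including the reporter,
        then return the ids whose audio is now available for the final
        classification pass (not implemented in this sketch)."""
        self.collected = {uid: fetch() for uid, fetch in self.units.items()}
        return sorted(self.collected)


hub = CentralHub({
    "aru-1": lambda: b"clip-1",
    "aru-2": lambda: b"clip-2",
    "aru-3": lambda: b"clip-3",
})
available = hub.on_gunshot_report("aru-2")  # ids of all collected clips
```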
In the final step of Dhar’s proposed model, the central hub collects and stores the audio files from each sensor. The Cornell-led team said this information can be used to pinpoint the location of the gunshot and “alert rangers with coordinates for immediate poaching intervention.”
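The article does not say how the hub pinpoints the shot, but a standard approach for this kind of sensor web is time-difference-of-arrival (TDOA) multilateration, sketched below with a coarse 2-D grid search. The sensor layout, timings, and grid parameters are invented for the example.

```python
# Assumed localization method: TDOA multilateration. The source whose
# predicted pairwise arrival-time differences best match the observed
# ones is taken as the shot's origin.
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate in air


def tdoa_residual(point, sensors, arrival_times):
    """Sum of squared mismatches between observed and predicted pairwise
    arrival-time differences for a candidate source point."""
    dists = [math.dist(point, s) for s in sensors]
    err = 0.0
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            observed = arrival_times[i] - arrival_times[j]
            predicted = (dists[i] - dists[j]) / SPEED_OF_SOUND
            err += (observed - predicted) ** 2
    return err


def locate(sensors, arrival_times, extent=1000, step=10):
    """Brute-force grid search over a square region (metres)."""
    best, best_err = None, float("inf")
    for x in range(0, extent + 1, step):
        for y in range(0, extent + 1, step):
            e = tdoa_residual((x, y), sensors, arrival_times)
            if e < best_err:
                best, best_err = (x, y), e
    return best


# Synthetic check: a shot at (400, 300) heard by four corner sensors
sensors = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
true_source = (400, 300)
times = [math.dist(true_source, s) / SPEED_OF_SOUND for s in sensors]
estimate = locate(sensors, times)
```

A production system would refine this with a least-squares solver rather than a grid, but the grid version makes the geometry easy to see.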
Although the model is still in the testing and development stages, Dhar said they are working to give their neural network the ability to detect the type of gun fired based on its acoustic signature. The team said this data could ultimately help their device tell different firearms apart and further distinguish gunshots from other sounds common in forested environments, such as chainsaws or trucks. If successful, they hope to field-test their approach in a real-world setting.
When discussing possible applications, Dhar said he envisions his device being used by rangers and conservation managers as a tool for “on-the-ground” intervention when they need an accurate, verifiable alert of gunfire. He also said the system could provide users with “low-latency data on the spatiotemporal trends of poachers,” to help allocate conservation and prevention resources most effectively.
Although designed for counter-poaching activities, the researchers noted that a system capable of detecting and localizing gunshots could be adapted to other data sources to operate in a broader range of environments.
“I hope the device can coalesce with Internet of Things infrastructure innovations and cost reduction of materials to produce a low-cost, open-source framework for real-time detection usable in any part of the globe,” Dhar explained.
Dhar will present his model on Tuesday, Dec. 2, at 3 p.m. HST as part of the Sixth Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan, running Dec. 1-5 in Honolulu, Hawaii.
Christopher Plain is a Science Fiction and Fantasy novelist and Head Science Writer at The Debrief. Follow and connect with him on X, learn about his books at plainfiction.com, or email him directly at christopher@thedebrief.org.
