A3EM: Animal-borne Adaptive Acoustic Environmental Monitoring
The primary objective of this project is to develop and deploy a novel, animal-borne adaptive acoustic monitoring system capable of long-term, real-time recording of ecologically significant sounds. Unlike traditional static acoustic sensors or fixed-event classifiers, our system dynamically identifies and retains rare or novel acoustic events using unsupervised learning techniques. This allows the creation of rich wilderness sound datasets while dramatically reducing redundant storage and power consumption.
Key goals of the project include:
- Designing ultra-low-power hardware suitable for deployment on diverse animal species, including those with strict weight constraints.
- Implementing configurable firmware and a lightweight AI pipeline that enables real-time, on-device event detection and filtering based on learned novelty.
- Developing a variational autoencoder-based architecture for projecting audio into a compact latent space, followed by online clustering to determine events worth preserving.
- Enabling flexible, modular deployments that support varied research goals, from studying animal behavior to detecting anthropogenic disturbances and supporting conservation.
- Validating the system through field deployments on caribou, African elephants, and bighorn sheep, while building an open repository of labeled wilderness acoustic data.
This work bridges the gap between advanced machine learning and embedded sensing to push the boundaries of scalable, autonomous wildlife monitoring.
This is a collaborative project between Vanderbilt University and Colorado State University. Team members:
- Devin Jean (Vanderbilt)
- Jesse Turner (CSU)
- Gyorgy Kalmar (University of Szeged)
- Saman Kittani (Vanderbilt)
- Gordon Stein (Vanderbilt)
- Will Hedgecock (Vanderbilt)
- George Wittemyer (CSU)
- Akos Ledeczi (Vanderbilt)
Check out the project's GitHub repository.
As part of the project, we created a curriculum for a week-long summer camp that teaches introductory programming with NetsBlox, a block-based language, using projects related to wildlife conservation.
Effective wildlife monitoring is essential for understanding animal behavior, detecting ecological changes, and informing conservation strategies. Traditional acoustic monitoring approaches rely on static recording stations that offer limited spatial coverage, lack real-time adaptability, and often produce massive volumes of redundant data requiring offline analysis. These limitations severely restrict scalability and responsiveness, particularly in remote or dynamic environments.
Animal-borne acoustic sensors offer a promising alternative by allowing mobile, distributed sensing directly from the animals themselves. However, existing biologging solutions are hampered by strict power and storage constraints that limit deployment duration—especially for smaller species—and lack the intelligence to selectively capture meaningful acoustic events. Moreover, most AI-based solutions depend on labeled training data and are restricted to detecting predefined sounds, making them ineffective in unfamiliar wilderness environments where novel or unexpected events may hold the most ecological value.
This project addresses a critical unmet need: a modular, low-power, intelligent acoustic sensing platform that can adapt in real time to diverse acoustic environments and retain only the most informative data. By combining embedded unsupervised learning with highly efficient hardware and configurable firmware, the system enables long-term, scalable monitoring across species and habitats—something that has not previously been feasible. It also fills a significant data gap by generating high-quality, annotated recordings of natural soundscapes from an animal’s perspective, which are largely missing from existing public datasets.
Our approach integrates embedded artificial intelligence, low-power hardware, and flexible firmware into a unified animal-borne acoustic monitoring system optimized for long-term deployments in the wild.
Hardware Design:
We developed a custom sensor board measuring only 18 × 23 mm, weighing 2.4 g, and consuming extremely low power. It features the Ambiq Apollo 4 Plus microcontroller, which provides high-performance computing with minimal energy consumption—enabling real-time, on-device processing. The board supports multiple microphone types, external GPS and VHF integrations, and environmental sensors, allowing adaptability to species size, ecological context, and research goals.
Firmware Architecture:
Our firmware supports a wide range of user-configurable deployment strategies, including schedule-based, threshold-triggered, and intelligent event-based recording. A graphical dashboard allows researchers to set parameters such as microphone gain, sampling rate, activation method, and multi-phase deployment plans without needing programming expertise.
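To make the parameter set concrete, the sketch below models the kinds of settings the dashboard exposes. The field names, value ranges, and phase structure are illustrative placeholders, not the actual firmware schema.

```python
from dataclasses import dataclass, field

# Hypothetical deployment-plan sketch; names and ranges are illustrative,
# not the real configuration format used by the firmware.
@dataclass
class DeploymentPhase:
    start_hour: int          # hour of day this phase becomes active
    end_hour: int            # hour of day this phase ends
    mode: str                # "scheduled", "threshold", or "event"
    sample_rate_hz: int = 16000
    mic_gain_db: int = 20

@dataclass
class DeploymentPlan:
    device_id: str
    phases: list = field(default_factory=list)

    def validate(self):
        for p in self.phases:
            assert 0 <= p.start_hour < 24 and 0 < p.end_hour <= 24
            assert p.mode in ("scheduled", "threshold", "event")
            assert p.sample_rate_hz in (8000, 16000, 32000)
        return True

plan = DeploymentPlan(
    device_id="caribou-01",
    phases=[
        DeploymentPhase(start_hour=5, end_hour=9, mode="event"),
        DeploymentPhase(start_hour=9, end_hour=17, mode="threshold",
                        sample_rate_hz=8000),
    ],
)
print(plan.validate())  # True
```

A multi-phase plan like this lets a researcher record intelligently at dawn, fall back to cheap threshold triggering midday, and stay off otherwise, without touching the firmware source.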
Adaptive Filtering and Machine Learning:
To intelligently select which sounds to store, we use an unsupervised learning pipeline built around a quantized Variational Autoencoder (VAE). Incoming audio is segmented into 1-second clips, from which Mel-Frequency Cepstral Coefficient (MFCC) features are extracted; the VAE encoder then compresses these features into compact latent embeddings. These embeddings are clustered online using an efficient novelty-detection algorithm. Only acoustically novel or rare sounds are stored, reducing redundant data and extending deployment duration.
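The online novelty check can be sketched as a nearest-centroid test in the latent space. In the real pipeline the embeddings come from MFCCs passed through the quantized VAE encoder; here they are treated as given vectors, and the distance threshold is a made-up value for illustration.

```python
import numpy as np

class OnlineNoveltyClusterer:
    """Sketch of online clustering for novelty detection (illustrative only)."""

    def __init__(self, dist_threshold=1.0):
        self.centroids = []   # running cluster centers in latent space
        self.counts = []      # clips absorbed by each cluster
        self.dist_threshold = dist_threshold

    def observe(self, z):
        """Return True if embedding z is novel (no nearby cluster)."""
        if self.centroids:
            dists = [np.linalg.norm(z - c) for c in self.centroids]
            i = int(np.argmin(dists))
            if dists[i] < self.dist_threshold:
                # Familiar sound: fold it into the nearest cluster's mean.
                self.counts[i] += 1
                self.centroids[i] += (z - self.centroids[i]) / self.counts[i]
                return False
        # Novel sound: open a new cluster and flag the clip for storage.
        self.centroids.append(z.astype(float).copy())
        self.counts.append(1)
        return True

rng = np.random.default_rng(0)
clusterer = OnlineNoveltyClusterer(dist_threshold=1.0)
ambient = rng.normal(0.0, 0.05, size=(50, 8))   # 50 similar ambient clips
outlier = np.full(8, 5.0)                        # one very different clip
decisions = [clusterer.observe(z) for z in ambient] + [clusterer.observe(outlier)]
print(sum(decisions))  # 2: the first ambient clip and the outlier are flagged
```

The incremental centroid update keeps memory constant per cluster, which matters on a microcontroller with kilobytes, not gigabytes, of RAM.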
Optimization and Simulation:
Our system was simulated across multi-day synthetic datasets to balance information retention with power and storage consumption. Field-tunable thresholds ensure that rare or unexpected events (e.g., predator calls, poaching activity, novel vocalizations) are preserved, while common ambient noises are filtered out.
Field Validation:
Initial deployments on caribou, African elephants, and bighorn sheep are validating system durability, data quality, and adaptive filtering performance across species and habitats. These real-world trials guide ongoing refinements in hardware, firmware, and filtering strategies.
This project introduces several key innovations that together redefine the possibilities of wildlife acoustic monitoring:
1. On-Device, Unsupervised Learning:
Unlike conventional approaches that rely on predefined sound classes and extensive labeled datasets, our system uses an embedded variational autoencoder (VAE) to detect novel or rare acoustic events in real time. This unsupervised learning approach enables the system to adapt to any acoustic environment without prior knowledge or retraining, making it uniquely suitable for wilderness deployments where unexpected events are often the most important.
2. Adaptive Clustering for Resource Efficiency:
We introduce a lightweight, online clustering algorithm that runs directly on the device to identify redundant versus novel sounds. By storing only representative samples of common sounds and prioritizing rare events, the system reduces storage and power usage—unlocking significantly longer deployment durations.
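The storage rule implied above can be sketched in a few lines: rare clusters keep every clip, while common clusters keep only an occasional representative. The cut-off and keep-rate below are illustrative placeholders, not tuned system values.

```python
COMMON_AFTER = 10   # a cluster counts as "common" once it has this many clips
KEEP_EVERY = 25     # for common clusters, store one clip in 25 as representative

def should_store(cluster_count):
    """cluster_count: clips already assigned to this clip's cluster."""
    if cluster_count < COMMON_AFTER:
        return True                        # still rare: keep everything
    return cluster_count % KEEP_EVERY == 0  # common: keep sparse representatives

# One cluster absorbing 1000 clips stores only a small fraction of them.
stored = sum(should_store(n) for n in range(1000))
print(stored)  # 49: 10 rare clips plus 39 sparse representatives
```

The representative samples preserve enough of the common soundscape for later analysis while the bulk of the storage and write-power budget goes to rare events.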
3. Miniaturized, Low-Power Hardware Platform:
Our custom-designed biologging board is smaller, lighter, and more power-efficient than existing solutions like AudioMoth, while offering significantly more memory, compute capability, and flexibility. It supports multiple microphone types, real-time decision-making, and integration with GPS and other sensors, all within a sub-$60 cost footprint.
4. Fully Configurable Deployment Framework:
Through a novel three-tiered firmware architecture—including onboard configuration, editable SD-card runtime settings, and a graphical desktop dashboard—researchers can easily tailor deployments to different species, ecosystems, and research goals without writing code.
5. Creation of Novel Datasets:
By enabling dynamic detection and recording of ecologically significant sounds directly from animals in their natural environments, this system generates unprecedented datasets—addressing a major gap in current wilderness bioacoustic repositories and advancing machine learning research in underrepresented acoustic domains.
Together, these innovations offer a leap forward in scalable, intelligent, and species-agnostic acoustic monitoring for ecological research and conservation.
The project has produced several impactful outcomes across hardware development, algorithm design, and field validation:
1. Functional Animal-Borne Acoustic Monitoring Platform:
We successfully designed and built a lightweight, low-power, and highly configurable acoustic sensing board suitable for deployment on a wide range of animal species. The device is capable of real-time audio processing and event-driven recording, supporting both research and conservation use cases.
2. Embedded Adaptive Filtering Algorithm:
We developed and deployed a quantized variational autoencoder (Q-VAE) combined with an efficient online clustering algorithm that enables real-time detection of rare or novel acoustic events. Simulations show that the system retains 80–85% of rare sounds while reducing storage of common events to 3–10%, effectively balancing information retention with power and storage usage.
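The retention figures above come from simulation; the sketch below shows one way such statistics could be computed from a simulation log. The clip labels and keep decisions here are synthetic, chosen only to illustrate the calculation.

```python
def retention_stats(log):
    """log: list of (label, kept) pairs, label in {"rare", "common"}.
    Returns the fraction of each class that was stored."""
    stats = {}
    for label in ("rare", "common"):
        total = sum(1 for l, _ in log if l == label)
        kept = sum(1 for l, k in log if l == label and k)
        stats[label] = kept / total if total else 0.0
    return stats

# Synthetic example: 20 rare clips (17 kept), 100 common clips (5 kept).
log = ([("rare", True)] * 17 + [("rare", False)] * 3
       + [("common", True)] * 5 + [("common", False)] * 95)
print(retention_stats(log))  # {'rare': 0.85, 'common': 0.05}
```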
3. Field Deployments on Diverse Species:
The system has been successfully deployed on caribou in Alaska, bighorn sheep in Colorado, and African elephants in Kenya. These initial deployments provided critical real-world validation of system durability, acoustic performance, and usability across climates and behavioral contexts.
4. Power Optimization Insights:
Through profiling and testing, we identified SD card writing as the largest power consumer. In response, we optimized firmware and data handling to minimize unnecessary writes, and we plan further improvements via multi-stage filtering and memory buffering. These strategies are expected to extend deployment lifespans to months or even a year.
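The memory-buffering idea mentioned above can be sketched as write batching: accumulate clips in a RAM buffer and flush to the SD card in large blocks, since each write carries a fixed power overhead. The buffer size and clip sizes below are made-up numbers, not measured firmware values.

```python
FLUSH_BYTES = 64 * 1024      # flush once the buffer holds this much audio

class BatchedWriter:
    """Illustrative write-batching sketch; flush_count stands in for
    the number of power-hungry SD card write events."""

    def __init__(self):
        self.buffer = bytearray()
        self.flush_count = 0

    def write(self, clip: bytes):
        self.buffer.extend(clip)
        if len(self.buffer) >= FLUSH_BYTES:
            self.flush()

    def flush(self):
        if self.buffer:
            # In firmware this would be a single large SD write.
            self.flush_count += 1
            self.buffer.clear()

writer = BatchedWriter()
for _ in range(100):             # 100 clips of 4 KiB each
    writer.write(b"\x00" * 4096)
writer.flush()                   # final drain
print(writer.flush_count)  # 7: 400 KiB lands as six full 64 KiB batches plus a remainder
```

Writing per batch instead of per clip cuts the SD write events here from 100 to 7; multi-stage filtering would reduce them further by discarding clips before they ever reach the buffer.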
5. Contribution to Data Resources and Research Tools:
Collected audio from field deployments is being curated into a novel dataset of wilderness sounds, which can inform future supervised learning models in ecoacoustics. The project also delivers reusable tools, including a user-friendly configuration dashboard and open-source firmware framework.
These outcomes position the system as a scalable solution for long-term, adaptive acoustic monitoring in wildlife research and open new avenues for studying animal behavior, health, and environmental change.