Abstract:
The integration of sensing, processing, and sound emission within IoT devices has enabled the efficient and economical deployment of intelligent audio sensors in urban environments. Devices such as the Audio Intelligence Monitoring at the Edge (AIME) unit, deployed in Singapore, operate continuously, adapt to varying conditions, and complement CCTV systems by providing real-time aural data. This data facilitates effective sound management strategies.
This presentation explores the requirements for intelligent sound sensing, leveraging deep learning to extract critical insights such as noise type, source direction, sound pressure, and event frequency. We will introduce deep-learning-based techniques for active noise control, including reducing residential noise intrusion and generating 'acoustic perfumes' to mask unwanted urban sounds, all within an edge-cloud framework. One such soundscape intervention device, developed in Singapore, enhances urban soundscapes by dynamically sensing ambient noise, selecting natural sound maskers, and adjusting playback in real time via ambisonic loudspeakers. The system, driven by AI, minimizes manual oversight while providing adaptive soundscapes. To support this work, the Affective Responses to Urban Soundscapes (ARAUS) dataset has been created to benchmark models that predict soundscape perception.
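To make the adaptive masking loop concrete, the sketch below shows one control iteration of a sense-select-adjust cycle. This is purely illustrative and is not the AIME or ARAUS implementation: the masker library, noise categories, and gain rule are all hypothetical placeholders for the learned components described above.

```python
# Illustrative sketch of an adaptive sound-masking loop: classify the
# dominant noise, pick a natural-sound masker, and scale its level to
# the ambient sound pressure. All names and thresholds are hypothetical.

# Hypothetical mapping from detected noise type to a natural masker.
MASKER_LIBRARY = {
    "traffic": "water_stream",
    "construction": "birdsong",
    "crowd": "wind_in_trees",
}

def select_masker(noise_type: str) -> str:
    """Pick a natural-sound masker for the detected noise type."""
    # In a real system this choice would come from a perception model
    # (e.g. one benchmarked on ARAUS), not a fixed lookup table.
    return MASKER_LIBRARY.get(noise_type, "water_stream")

def masker_gain(ambient_db: float, reference_db: float = 65.0) -> float:
    """Scale masker playback level with ambient loudness, capped at 1.0."""
    if ambient_db <= 0:
        return 0.0
    return min(1.0, ambient_db / reference_db)

def control_step(noise_type: str, ambient_db: float) -> dict:
    """One iteration of the loop: choose a masker and its playback gain."""
    return {
        "masker": select_masker(noise_type),
        "gain": round(masker_gain(ambient_db), 2),
    }
```

In a deployed edge-cloud system, `control_step` would run on-device against streaming classifier output, with the cloud side handling model updates and long-term analytics.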
This presentation aims to demonstrate how deep learning can advance acoustic sensing and noise mitigation, addressing current challenges and paving the way for more adaptive urban sound management solutions.