This project investigates advanced active noise and vibration control (ANVC) systems aimed at reducing unwanted sound and vibration within spatially distributed regions of interest. The research addresses the growing demand for quieter, more personalised acoustic environments in applications ranging from consumer audio and transportation to built environments and industrial systems. A central focus is on spatial active noise control, where the objective extends beyond single-point noise reduction to the creation and expansion of controlled quiet zones within enclosed or semi-enclosed spaces.
The project advances adaptive and frequency-domain control techniques for multi-input multi-output (MIMO) active noise and sound field control systems. Key challenges such as convergence speed, stability, robustness, and control performance are studied in depth, with particular emphasis on the design of adaptive algorithms capable of operating reliably in complex acoustic environments. The research also explores frequency-domain system identification and optimal experiment design, analysing how different excitation signals influence the accuracy and variance of estimated frequency response functions. By explicitly characterising estimation uncertainty, noise contributions, and nonlinear effects, the project enables better-weighted control and modelling strategies for high-performance ANVC systems.
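To make the adaptive-control idea concrete, here is a minimal single-channel filtered-x LMS (FxLMS) sketch. Everything in it is a hypothetical stand-in: the tonal-plus-broadband reference signal, the FIR primary and secondary paths, the filter length, and the step size are illustrative choices, and the project's actual MIMO, frequency-domain algorithms are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reference: a 200 Hz tone with a small broadband component
fs, n = 8000, 20000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 200 * t) + 0.01 * rng.standard_normal(n)

s = np.array([0.0, 0.5, 0.3, -0.1])        # secondary path S(z) (illustrative)
s_hat = s.copy()                            # assume a perfect model of S(z)
p = np.array([0.0, 0.0, 0.8, 0.4, -0.2])    # primary path P(z) (illustrative)
d = np.convolve(x, p)[:n]                   # disturbance at the error microphone

L, mu = 32, 0.01                            # filter length and step size
w = np.zeros(L)                             # adaptive control filter
x_buf = np.zeros(L)                         # reference history
xf_buf = np.zeros(L)                        # filtered-reference history
y_buf = np.zeros(len(s))                    # anti-noise history
xf = np.convolve(x, s_hat)[:n]              # filtered reference x'(n) = s_hat * x
e = np.zeros(n)

for i in range(n):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[i]
    y = w @ x_buf                           # anti-noise before the secondary path
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e[i] = d[i] - s @ y_buf                 # residual at the error microphone
    xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf[i]
    w = w + mu * e[i] * xf_buf              # FxLMS weight update
```

Filtering the reference through the secondary-path estimate before the LMS update is what keeps the gradient direction correct despite the delay and coloration that S(z) introduces between the control output and the error sensor.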
A further core contribution of the project lies in the modelling and analysis of acoustic source coupling in confined environments, where interactions between secondary sources and enclosure modes cannot be neglected. The work develops analytical and numerical frameworks to study coupled acoustic systems, revealing how loudspeaker–loudspeaker and loudspeaker–room interactions influence sound pressure levels and control effectiveness. These insights directly inform the design of robust MIMO control architectures, sensor–actuator layouts, and driving strategies, enabling scalable and physically grounded active noise and vibration control solutions for real-world deployment.
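The source-coupling effect can be illustrated with a free-field toy model (the project's actual analysis concerns enclosures, where modal coupling is stronger): two equal, in-phase monopoles radiate more than twice the power of one source when closely spaced, because each source works against the pressure field of the other. The source strength and frequencies below are arbitrary illustrative values.

```python
import numpy as np

rho, c = 1.21, 343.0  # air density [kg/m^3] and speed of sound [m/s]

def radiated_power(f, d, q=1e-3):
    """Total power [W] of two equal, in-phase monopoles of volume
    velocity q [m^3/s], separated by d [m], in free field."""
    k = 2 * np.pi * f / c
    R_self = rho * c * k**2 / (4 * np.pi)    # self radiation resistance
    R_mut = R_self * np.sinc(k * d / np.pi)  # mutual term: sin(kd)/(kd)
    # W = 0.5 * q^2 * (R11 + R22 + 2 * R12) for in-phase driving
    return 0.5 * q**2 * (2 * R_self + 2 * R_mut)

# kd << 1: coherent coupling doubles the power of two independent sources;
# at kd = pi the mutual resistance vanishes and the sources decouple.
W_close = radiated_power(200.0, 0.01)
W_far = radiated_power(200.0, 10.0)
```

The same mutual-impedance bookkeeping, extended to enclosure Green's functions instead of the free-field kernel, is what makes loudspeaker placement and driving strategy matter so much in confined spaces.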
Recent technological advances in spatial active noise control systems [Video] [GitHub]
Optimal Input Excitation Design for Nonparametric Uncertainty Quantification of Multi-Input Multi-Output Systems [Video] [GitHub]
Modeling and analysis of secondary sources coupling for active sound field reduction in confined spaces [Video] [GitHub]
This project applies machine learning and digital signal processing (DSP) to multilingual speech processing, focusing on speech separation, enhancement, and robust transcription. We developed a dynamic, reproducible dataset that combines clean single-speaker recordings with room impulse responses and background noise, simulating realistic acoustic environments with overlapping speakers across multiple languages.
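The core mixing step of such a dataset pipeline can be sketched as follows. The signals below are random stand-ins; the project uses real recordings and measured room impulse responses, and the helper name `mix_at_snr` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_at_snr(speech, rir, noise, snr_db):
    """Convolve speech with a room impulse response, then add noise
    scaled so the reverberant speech sits snr_db above it."""
    reverberant = np.convolve(speech, rir)[: len(speech)]
    p_speech = np.mean(reverberant**2)
    p_noise = np.mean(noise**2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return reverberant + gain * noise[: len(reverberant)]

# Illustrative stand-ins for one second of 16 kHz audio
fs = 16000
speech = rng.standard_normal(fs)
rir = np.exp(-np.arange(2000) / 300.0) * rng.standard_normal(2000)
rir /= np.max(np.abs(rir))                 # exponentially decaying synthetic RIR
noise = rng.standard_normal(fs)
mixture = mix_at_snr(speech, rir, noise, snr_db=5.0)
```

Because the RIRs, noise, and SNRs are sampled on the fly from fixed seeds, the same recipe yields both variety during training and bit-exact reproducibility for evaluation.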
The dataset was used to benchmark publicly available, state-of-the-art speech separation and enhancement models, providing a reproducible evaluation framework across metrics such as SI-SDR, SDR, PESQ, STOI, and SNR. The separated and enhanced signals were then used to improve the performance of encoder-decoder multilingual translation and transcription models, demonstrating the direct impact of front-end audio processing on downstream tasks.
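Of the metrics listed, SI-SDR is simple enough to show in full; here is a minimal sketch using the common zero-mean convention (the project presumably relies on established metric implementations rather than this hand-rolled version).

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant signal-to-distortion ratio in dB."""
    estimate = estimate - np.mean(estimate)
    reference = reference - np.mean(reference)
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference          # projection of the estimate onto the reference
    error = estimate - target
    return 10 * np.log10(np.sum(target**2) / np.sum(error**2))

# Illustrative signals: the metric is unchanged by rescaling the estimate
rng = np.random.default_rng(1)
ref = rng.standard_normal(16000)
noisy = ref + 0.1 * rng.standard_normal(16000)
```

The projection onto the reference is what makes the score invariant to the overall gain of the separated signal, which ordinary SDR or SNR would penalise.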
To ensure reliability in critical applications, we also developed a framework to detect and rectify hallucinations, mitigating errors in outputs when audio is partially corrupted, overlapping, or noisy. By combining DSP techniques for signal conditioning with ML models for separation, enhancement, and downstream translation, the project delivers a robust, scalable, and multilingual speech processing pipeline suitable for real-world environments.
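One well-known symptom of ASR hallucination on corrupted audio is looping output, which a simple repeated-n-gram check can flag. This heuristic is only an illustration of the kind of signal such a framework might use, not the project's actual detection method.

```python
from collections import Counter

def repeated_ngram_ratio(text, n=3):
    """Fraction of word n-grams in `text` that are repeats of an earlier
    n-gram; high values often indicate looping, hallucinated output."""
    words = text.lower().split()
    if len(words) < n:
        return 0.0
    grams = [tuple(words[i : i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(grams)
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(grams)
```

A production detector would combine several such cues (decoder confidence, audio-text alignment, language-model perplexity) before deciding to re-decode or discard a segment.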
Potential applications include multilingual transcription and translation, assistive communication technologies, and real-time audio processing in noisy or overlapping speech conditions.