Research Highlights

Multi-Zone Sound Reproduction

Multi-zone sound reproduction methods deliver personalized sound zones to individuals in a shared environment, such as exhibition centers, home environments, shared offices, private vehicles, and music stores. State-of-the-art multi-zone reproduction methods rely on controlling the energies radiated by loudspeaker arrays.

Illustration of personal sound zones in an office environment (left), and a loudspeaker array used to create multiple sound zones (right).
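
To make the energy-control idea concrete, the sketch below computes acoustic contrast control weights: loudspeaker driving weights that maximize the ratio of bright-zone to dark-zone energy via a generalized eigenvalue problem. It is a minimal illustration in Python/NumPy; the geometry, frequency, and regularizer are our own assumptions, not values from the work described above.

    import numpy as np

    # Illustrative free-field setup; all geometry and parameters are assumptions.
    c = 343.0                    # speed of sound [m/s]
    f = 500.0                    # frequency [Hz]
    k = 2 * np.pi * f / c        # wavenumber [rad/m]

    rng = np.random.default_rng(0)
    speakers = rng.uniform(-2.0, 2.0, size=(16, 2))               # loudspeaker positions [m]
    bright = rng.uniform(-0.2, 0.2, size=(20, 2)) + [1.0, 0.0]    # bright-zone control points
    dark = rng.uniform(-0.2, 0.2, size=(20, 2)) + [-1.0, 0.0]     # dark-zone control points

    def transfer_matrix(points, sources):
        # Free-field Green's functions G[m, l] = exp(-j k r) / (4 pi r).
        r = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=-1)
        return np.exp(-1j * k * r) / (4 * np.pi * r)

    Gb = transfer_matrix(bright, speakers)    # loudspeakers -> bright zone
    Gd = transfer_matrix(dark, speakers)      # loudspeakers -> dark zone

    # Acoustic contrast control: maximize w^H (Gb^H Gb) w / w^H (Gd^H Gd) w,
    # i.e. solve the generalized eigenproblem (Gb^H Gb) w = lambda (Gd^H Gd) w.
    A = Gb.conj().T @ Gb
    B = Gd.conj().T @ Gd + 1e-6 * np.eye(len(speakers))   # small regularizer
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    w = eigvecs[:, np.argmax(eigvals.real)]               # optimal driving weights

    contrast = np.linalg.norm(Gb @ w) ** 2 / np.linalg.norm(Gd @ w) ** 2
    print(f"Acoustic contrast: {10 * np.log10(contrast):.1f} dB")

Acoustic contrast control is one standard energy-based formulation; pressure matching, which additionally constrains the phase of the bright-zone field, is the other common baseline.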

  • Contribution

In this context, our current research focuses on developing multi-zone methods that preserve the spatial quality of the reproduced sound fields while using limited resources. We have developed sparsity-based frameworks in both the spatial and spherical-harmonic domains for the simultaneous reproduction of multiple desired sound fields using a minimal, optimally selected set of distributed loudspeakers. Additionally, multi-zone methods using directional loudspeakers have been developed for spatial sound reproduction and efficient resource utilization.
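
One common way to realize such sparsity, sketched below, is an L1-regularized pressure-matching problem solved with iterative soft-thresholding (ISTA), so that only a few candidate loudspeakers receive non-zero driving weights. This is an illustrative formulation under our own assumptions, not the specific framework developed in this research; the snippet reuses Gb, Gd, bright, dark, and k from the earlier sketch.

    def complex_lasso(G, p, lam, n_iter=500):
        # ISTA for min_w 0.5 * ||G w - p||^2 + lam * ||w||_1 with complex w.
        # A sparse w activates only a subset of the candidate loudspeakers.
        step = 1.0 / np.linalg.norm(G, 2) ** 2        # 1 / Lipschitz constant
        w = np.zeros(G.shape[1], dtype=complex)
        for _ in range(n_iter):
            z = w - step * (G.conj().T @ (G @ w - p))     # gradient step
            mag = np.abs(z)
            # Complex soft-thresholding: shrink magnitudes, keep phases.
            w = z / np.maximum(mag, 1e-12) * np.maximum(mag - step * lam, 0.0)
        return w

    # Pressure matching over both zones: a plane wave travelling along x in
    # the bright zone, silence in the dark zone.
    p_des = np.concatenate([np.exp(-1j * k * bright[:, 0]), np.zeros(len(dark))])
    G = np.vstack([Gb, Gd])
    lam = 0.1 * np.abs(G.conj().T @ p_des).max()      # 10% of the max correlation
    w_sparse = complex_lasso(G, p_des, lam)
    active = np.flatnonzero(np.abs(w_sparse) > 1e-3 * np.abs(w_sparse).max())
    print(f"{len(active)} of {G.shape[1]} candidate loudspeakers selected")

Sweeping the regularization weight trades reproduction accuracy against the number of active loudspeakers, which is how a minimal loudspeaker set can be selected from a dense candidate grid.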


  • Future Prospects

In this domain, future research is directed towards novel multi-zone methods that remain robust under challenging and dynamic conditions, including reverberant and noisy environments, moving listeners, and sound reproduction in closely spaced zones. In addition, robust multi-zone methods can be developed using deep learning frameworks.

Audio-Visual Processing in Augmented and Virtual Reality

Augmented and Virtual Reality (AR/VR) are potential next-generation technologies delivering near-to-reality experiences in almost every field of life. This development is readily observed in education, marketing, e-commerce, and event management, where AR/VR delivers enhanced virtual experiences in terms of presentation and accessibility.

General layout of an Interactive 3D Real-Time System

  • Contribution

In this context, the current research focuses on developing dynamic virtual reality systems using audio-visual processing. Spatial sound rendering based on head-related transfer functions (HRTFs) is explored in virtual environments, with the aim of developing an interactive 3D real-time system combining audio and visual scenes on a virtual reality platform. Additionally, prototypes of various object-based learning applications using virtual and augmented reality have been developed, along with applications for voice-based object control and speech-to-text conversion in virtual environments.
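
As a minimal illustration of the HRTF-based rendering step, the sketch below convolves a mono source with a left/right head-related impulse response (HRIR) pair to produce a binaural signal. The noise burst and dummy exponentially decaying HRIRs are assumptions made for self-containment; a real system would use a measured HRIR set.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        # Convolve a mono source with the HRIR pair for one source direction,
        # yielding a 2-channel signal for headphone playback.
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)

    # Illustrative stand-ins: a 1-second noise burst and dummy 256-tap HRIRs.
    fs = 48_000
    rng = np.random.default_rng(1)
    mono = rng.standard_normal(fs)
    decay = np.exp(-np.arange(256) / 32.0)
    hrir_l = rng.standard_normal(256) * decay
    hrir_r = rng.standard_normal(256) * decay
    binaural = render_binaural(mono, hrir_l, hrir_r)   # shape (fs + 255, 2)

In an interactive system, the HRIR pair is re-selected from the measured set as the listener's head orientation changes, typically with crossfading between successive filters to avoid audible switching artifacts.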

  • Future Prospects

Future research focuses on developing cross-platform AR/VR applications incorporating voice- and speech-based assistance, with a specific focus on introducing cloud-based multilingual support in AR/VR environments.