Methodological advances for Audio Augmented Reality and its applications
As part of the HAIKUS project (ANR-19-CE23-0023), funded by the French National Research Agency (ANR), IRCAM, the Laboratoire lorrain de recherche en informatique et ses applications (Loria) and the Institut Jean Le Rond d'Alembert (IJRA) are organizing a one-day workshop on methodological advances for Audio Augmented Reality and its applications.
Audio Augmented Reality (AAR) seeks to integrate computer-generated and/or pre-recorded auditory content into the listener's real-world environment. Because hearing plays a vital role in how we understand and interact with our spatial environment, AAR can significantly enhance the auditory experience and increase user engagement in Augmented Reality (AR) applications, particularly in artistic creation, cultural mediation, entertainment and the communication industries.
Audio signal processors are a key component of the AAR workflow, as they provide real-time control of 3D sound spatialization and of the artificial reverberation applied to virtual sound events. These tools have now reached a level of maturity that allows them to support large multichannel loudspeaker systems as well as binaural rendering over headphones. However, the accuracy of the spatial processing applied to virtual sound objects is essential to ensure their seamless integration into the listener's real environment, and thereby to guarantee a high-quality user experience. To achieve this level of integration, methods are needed to identify the acoustic properties of the environment and to adjust the spatialization engine's parameters accordingly. Ideally, such methods should enable automatic inference of the acoustic channel's characteristics based solely on live recordings of the natural, and often dynamic, sounds present in the real environment (e.g. voices, noise, ambient sounds, moving sources). These topics are gaining increasing attention, especially in light of recent advances in data-driven approaches within the field of acoustics. In parallel, perceptual studies are being conducted to define the level of accuracy required to guarantee a coherent sound experience.
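As a minimal illustration of the kind of acoustic parameter such methods must recover, the sketch below estimates a room's reverberation time (RT60) from an impulse response using Schroeder backward integration, a classical (non-blind) technique; the synthetic impulse response, sampling rate and decay-fit range are arbitrary choices for the example, not part of the HAIKUS project.

```python
import numpy as np

def estimate_rt60(ir, fs, fit_range=(-5.0, -25.0)):
    """Estimate RT60 (seconds) from an impulse response via Schroeder integration."""
    # Backward-integrate the squared impulse response -> energy decay curve
    edc = np.cumsum(ir[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0])
    # Fit a line to the decay between the chosen dB levels (here -5 to -25 dB)
    hi, lo = fit_range
    i0 = int(np.argmax(edc_db <= hi))
    i1 = int(np.argmax(edc_db <= lo))
    t = np.arange(len(ir)) / fs
    slope, _ = np.polyfit(t[i0:i1], edc_db[i0:i1], 1)
    # Extrapolate the fitted slope (dB/s) to a 60 dB decay
    return -60.0 / slope

# Synthetic test: exponentially decaying noise with a known RT60 of 0.5 s
fs = 16000
rt60_true = 0.5
t = np.arange(int(fs * 1.5)) / fs
rng = np.random.default_rng(0)
ir = rng.standard_normal(t.size) * 10.0 ** (-3.0 * t / rt60_true)

print(f"estimated RT60: {estimate_rt60(ir, fs):.2f} s")
```

A blind method of the kind envisioned above would have to infer the same parameter without access to a measured impulse response, using only the reverberant sounds naturally occurring in the room.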
Panellists and talks
Antoine Deleforge, Inria
François Ollivier, Institut Jean Le Rond d'Alembert - Sorbonne University
Annika Reinhardt, University of Surrey, UK
Sebastian Schlecht, University of Erlangen–Nuremberg, Germany
Cagdas Tuna, Fraunhofer IIS, Germany
Toon van Waterschoot, KU Leuven University, Belgium
Olivier Warusfel, IRCAM