Acoustic and Cognitive Spaces
The team’s main scientific disciplines are signal processing and acoustics, applied to the development of spatialized audio reproduction techniques and of methods for the analysis/synthesis of sound fields.
In parallel, the team devotes a significant part of its activity to cognitive studies of multisensory integration, supporting the rational development of new sonic mediations based on body/hearing/space interaction. The scientific activities described below are combined with the development of software libraries. These developments build on the team’s expertise and on its academic and experimental research, and they are the main vector of our relationship with musical creation and other application domains.
The work carried out on spatialization techniques concentrates on models based on a physical formalism of the sound field. The primary objective is the development of a formal framework for the analysis/synthesis of the sound field using spatial room impulse responses (SRIRs). SRIRs are generally measured with spherical arrays featuring several dozen transducers (microphones and/or loudspeakers). The principal application is the development of convolution reverberators that use these high-spatial-resolution SRIRs to faithfully reproduce the complexity of a sound field.
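The core of such a convolution reverberator can be sketched as follows. This is a minimal illustration, not the team's implementation: the function name is hypothetical, and a real-time engine would use partitioned, low-latency convolution rather than one full-length FFT convolution per channel.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_srir(dry, srir, mix=1.0):
    """Convolve a mono dry signal with a multichannel SRIR.

    dry  : (n,) mono input signal
    srir : (m, channels) spatial room impulse response, one column per
           transducer of the measurement/reproduction array
    mix  : wet/dry balance in [0, 1]
    """
    n = len(dry)
    channels = srir.shape[1]
    # One linear convolution per SRIR channel, truncated to the input
    # length for simplicity (a real reverberator keeps the full tail).
    wet = np.stack(
        [fftconvolve(dry, srir[:, c])[:n] for c in range(channels)],
        axis=1,
    )
    # Duplicate the dry signal on every channel and blend with the wet part.
    return (1.0 - mix) * dry[:, None] + mix * wet
```

Feeding a unit impulse through this function with `mix=1.0` simply reproduces the SRIR itself, which is a convenient sanity check.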
Binaural spatialization over headphones is also a focus of our attention. The evolution of listening practices and the democratization of interactive applications tend to favor headphone listening on smartphones. Taking advantage of this sonic immersion, binaural listening has become the primary vector of three-dimensional listening. Based on the exploitation of head-related transfer functions (HRTFs), it is currently the only approach that ensures a precise and dynamic reconstruction of the perceptual cues responsible for auditory localization. It has become the reference tool for experimental research on spatial cognition in a multisensory context and for virtual reality applications.
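The principle of HRTF-based rendering can be sketched as below: a mono source is filtered by the pair of head-related impulse responses (HRIRs, the time-domain counterpart of HRTFs) measured for its direction. The function names and the nearest-neighbour direction lookup are illustrative assumptions; dynamic rendering driven by head tracking would interpolate between measured HRTFs rather than switch between them.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono source to binaural stereo by filtering it with the
    left/right HRIRs measured for one direction (same length assumed)."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

def nearest_hrir(azimuth_deg, hrir_db):
    """Pick the HRIR pair measured closest to the requested azimuth.

    hrir_db maps a measured azimuth in degrees to a (left, right) pair;
    the distance is computed on the circle so that 350° is near 0°.
    """
    best = min(
        hrir_db,
        key=lambda a: abs(((azimuth_deg - a) + 180.0) % 360.0 - 180.0),
    )
    return hrir_db[best]
```

A head tracker would call `nearest_hrir` (or an interpolating variant) every few milliseconds with the source direction expressed in the listener's head coordinates, then feed the selected pair to `binaural_render`.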
These 3D audio spatialization techniques, associated with a tracking system that captures the movements of a performer or of a member of the audience, constitute an organological base that is essential for addressing the issues of "musical, sound, and multimedia interaction". At the same time, they nourish research on the cognitive mechanisms related to the sensation of space, in particular on the coordination between the various sensory modalities (hearing, vision, proprioception, motricity, ...) that is necessary for the perception and representation of space. We seek to uncover how the different acoustic cues (location, distance, reverberation, ...) used by the human central nervous system influence the integration of sensory information and its interaction with emotional processes.
On the musical level, our ambition is to provide models and tools that enable composers to place sounds in a given space throughout the compositional process, from writing to the concert. This contributes to making spatialization a parameter of musical writing. In the arts, this research also applies to post-production, interactive sound installations, and dance, via questions related to sound/space/body interaction. The incorporation of sound spatialization in virtual reality environments also opens scientific applications in neuroscience research, therapeutic systems, and transportation simulators.
Major themes
- Sound Spatialization: hybrid reverberation and spatial room impulse responses (SRIR), SRIR analysis-synthesis; synthesis of sound fields via high-density spatial networks, WFS and HOA systems in the Espace de Projection, binaural listening, distributed spatialization (CONTINUUM)
- Cognitive foundations: auditory spatial cognition, multisensory integration and emotion, EntreCorps project, music and cerebral plasticity, perception of distance in augmented reality
- Creation / Mediation: audio rendering of spaces in the RASPUTIN project
Collaborations
ARI-ÖAW (Austria), Bayerischer Rundfunk (Germany), BBC (United Kingdom), B<>COM (France), Ben Gurion University (Israel), Conservatoire national supérieur de musique et de danse de Paris (France), CNES (France), Elephantcandy (Netherlands), France Télévisions (France), Fraunhofer IIS (Germany), Hôpital de la Salpêtrière (France), HEGP (France), Hôpital universitaire de Zürich (Switzerland), IRBA (France), IRT (Germany), L-Acoustics (France), Joanneum Research (Austria), LAM (France), McGill University (Canada), Orange Labs (France), RWTH (Germany), Radio France (France), RPI (United States)
Research topics and related projects
Corpus-Based Concatenative Synthesis
Database of recorded sounds and a unit selection algorithm
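The unit selection step can be sketched as a nearest-neighbour search in descriptor space. This is a deliberately simplified illustration with hypothetical names: it ignores the concatenation cost between successive units that a full concatenative synthesizer would also minimize.

```python
import numpy as np

def select_units(target_feats, corpus_feats):
    """For each target descriptor vector, return the index of the corpus
    unit whose descriptors are closest in Euclidean distance -- a greedy
    sketch of unit selection without a concatenation cost.

    target_feats : (t, d) descriptors of the desired sound trajectory
    corpus_feats : (u, d) descriptors of the recorded units in the database
    """
    # Pairwise distances between every target frame and every corpus unit.
    dists = np.linalg.norm(
        target_feats[:, None, :] - corpus_feats[None, :, :], axis=2
    )
    # Best-matching unit per target frame.
    return np.argmin(dists, axis=1)
```

The selected indices would then address the recorded audio segments, which are concatenated (with crossfades) to form the output.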
Sound-Music-Health Axis (Axe Son-Musique-Santé)
This cross-disciplinary axis brings together the research related to well-being and health carried out within the STMS laboratory. It has several objectives:
European and national projects
Continuum
Live performance augmented in its sound dimensions
DAFNE+
Decentralized platform for fair creative content distribution empowering creators and communities through new digital distribution models based on digital tokens
RASPUTIN
Simulation of architectural acoustics for a better spatial understanding using immersive navigation in real time
HAIKUS
Artificial Intelligence applied to augmented acoustic Scenes
Software (design and development)
Spat~
ToscA
Panoramix
ADMix Tools
Team
Head Researcher: Olivier Warusfel
Researchers & Engineers: Markus Noisternig, Thibaut Carpentier, John Burnett, Mathieu Carré, Hélène Bahu, Benoît Alary, Isabelle Viaud-Delmon
Engineer: Coralie Vincent
Associated Researcher: Hélène Bahu
Trainee: Ulysse Roussel