Acoustic and Cognitive Spaces
The team’s core disciplines are signal processing and acoustics, applied to the design of spatialized audio reproduction techniques and to methods for the analysis/synthesis of sound fields. In parallel, the team devotes a substantial share of its time to cognitive studies of multisensory integration, informing the principled development of new sonic mediations based on body/hearing/space interaction. The scientific activities described below are coupled with the development of software libraries. These developments build on the team’s expertise and on its academic and experimental research, and they are the main vector of our relationship with musical creation and other application domains.
The work on spatialization techniques concentrates on models based on a physical formalism of the sound field. The primary objective is the development of a formal framework for the analysis/synthesis of sound fields using spatial room impulse responses (SRIRs). SRIRs are generally measured with spherical arrays comprising several dozen transducers (microphones and/or loudspeakers). The principal application is the development of convolution reverberators that use these high-spatial-resolution SRIRs to faithfully reproduce the complexity of a sound field.
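As a concrete illustration, the sketch below applies a multichannel SRIR to a dry signal by channel-wise convolution, which is the core operation of such a convolution reverberator. It is a minimal sketch in Python, assuming the SRIR is already available as a NumPy array with one impulse response per spatial channel; the array shapes and the synthetic test SRIR are illustrative assumptions, not the team’s actual data or tools.

    import numpy as np
    from scipy.signal import fftconvolve

    def convolve_srir(dry, srir):
        """Convolve a mono dry signal with each channel of an SRIR.
        dry  : (n_samples,) anechoic source signal
        srir : (n_channels, n_taps), one impulse response per spatial channel
        Returns (n_channels, n_samples + n_taps - 1), one reverberant feed per channel.
        """
        return np.stack([fftconvolve(dry, ir) for ir in srir])

    # Example with a synthetic 4-channel SRIR: exponentially decaying noise
    # standing in for a measured response (illustrative only).
    fs = 48000
    dry = np.random.randn(fs // 10)                  # 100 ms noise burst
    decay = np.exp(-np.arange(fs) / (0.5 * fs))      # roughly 0.5 s decay
    srir = np.random.randn(4, fs) * decay
    wet = convolve_srir(dry, srir)

In practice such reverberators use partitioned, low-latency convolution; direct FFT convolution keeps the sketch short.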
The technique of binaural spatialization over headphones is also a focus of our attention. The evolution of listening practices and the democratization of interactive applications increasingly favor headphone listening on smartphones. Thanks to the sonic immersion it affords, binaural listening has become the primary vector of three-dimensional audio. Based on the exploitation of head-related transfer functions (HRTFs), it is currently the only approach that ensures a precise and dynamic reconstruction of the perceptual cues responsible for auditory localization. It has become the reference tool for experimental research on spatial cognition in a multisensory context and for virtual reality applications.
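For illustration, a minimal Python sketch of static binaural rendering follows. It assumes a set of head-related impulse responses (HRIRs, the time-domain counterpart of HRTFs) is available as a dictionary mapping source azimuth in degrees to a stereo impulse-response pair; this data layout and the nearest-neighbour selection are simplifying assumptions, not the team’s actual renderer.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrirs, azimuth):
        """Filter a mono signal with the HRIR pair closest to the target azimuth.
        hrirs : dict {azimuth_deg: (2, n_taps) array} -- assumed measured set
        """
        # Nearest azimuth on the circle (handles wrap-around at 0/360 degrees).
        nearest = min(hrirs, key=lambda az: abs((az - azimuth + 180) % 360 - 180))
        left, right = hrirs[nearest]
        return np.stack([fftconvolve(mono, left), fftconvolve(mono, right)])

A real renderer would interpolate between measured directions rather than snap to the nearest one; nearest-neighbour lookup keeps the example compact.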
These 3D audio spatialization techniques, associated with a tracking system that captures the movements of a performer or audience member, constitute an organological basis essential for addressing questions of “musical, sound, and multimedia interaction”. They offer an opportunity to reflect on the cognitive foundations of the sense of space, in particular on the coordination required among the sensory modalities for the perception and cognition of space. More specifically, we wish to highlight the importance of the integration processes between idiothetic cues (related to our motor actions) and the acoustic cues (localization, distance, reverberation, etc.) used by the central nervous system to build a spatial representation of the perceived environment.
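The sketch below illustrates the dynamic, head-tracked half of this loop, reusing render_binaural() from the previous sketch: at each block the world-fixed source direction is re-expressed in the listener’s head frame using the tracked yaw angle, so the virtual source stays anchored in the room while the head turns. The block size, the yaw signal, and the overlap-add scheme are illustrative assumptions.

    import numpy as np

    def head_tracked_binaural(mono, hrirs, source_az_world, head_yaw, block=1024):
        """Block-wise binaural rendering with head-yaw compensation.
        head_yaw : yaw angle in degrees for each block, as read from a tracker.
        Reuses render_binaural() defined in the previous sketch.
        """
        n_taps = len(next(iter(hrirs.values()))[0])
        out = np.zeros((2, len(mono) + n_taps - 1))
        for i, start in enumerate(range(0, len(mono), block)):
            # Idiothetic compensation: world-frame direction -> head-frame direction.
            relative_az = (source_az_world - head_yaw[i]) % 360
            seg = render_binaural(mono[start:start + block], hrirs, relative_az)
            out[:, start:start + seg.shape[1]] += seg   # overlap-add the block tails
        return out

Switching filters abruptly between blocks can produce audible clicks; production renderers crossfade or interpolate HRTFs between tracker updates.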
On the musical level, our ambition is to provide models and tools that enable composers to place sounds in space throughout the compositional process, from writing to the concert. This contributes to making spatialization a genuine parameter of musical writing. In the arts, this research also applies to post-production, to interactive sound installations, and to dance, through questions of sound/space/body interaction. Incorporating sound spatialization into virtual reality environments also opens scientific applications in neuroscience research, therapeutic systems, and transportation simulators.
Research topics and related projects
Room Impulse Response Renderer
Study of multisensory integration processes
European and national projects
Contribution of audio and vestibular interactions in the perception of our own body
Collaborative Situated Media
Extended Frameworks For 'In-Time' Computer-Aided Composition
Study of the mechanisms of interpersonal coordination in public human interactions
Sensori-motor learning in gesture-based interactive sound systems
Object Based Broadcasting
Simulation of architectural acoustics for better spatial understanding using real-time immersive navigation
Personalised Virtual Reality Scenarios for Groups at Risk of Social Exclusion
Software (design and development)
Head Researcher : Olivier Warusfel
Researchers & Engineers : Markus Noisternig, David Poirier-Quinot, Thibaut Carpentier, Isabelle Viaud-Delmon
Doctoral Students : Franck Elisabeth, Franck Zagala, Pierre Massé, Vincent Martin
Postdoctoral Researchers : Lise Hobeika, Marine Taffou
Research Composer : Nadine Schütz
Visiting Doctoral Student : Marta Gospodarek
ARI-ÖAW (Austria), Bayerischer Rundfunk (Germany), BBC (United Kingdom), B<>COM (France), Ben Gurion University (Israel), Conservatoire national supérieur de musique et de danse de Paris (France), CNES (France), elephantcandy (Netherlands), France Télévisions (France), Fraunhofer IIS (Germany), Hôpital de la Salpêtrière (France), HEGP (France), Hôpital universitaire de Zürich (Switzerland), IRBA (France), IRT (Germany), L-Acoustics (France), Joanneum Research (Austria), LAM (France), McGill University (Canada), Orange-Labs (France), RWTH (Germany), Radio France (France), RPI (United States)