Acoustic and Cognitive Spaces

The Acoustic and Cognitive Spaces research and development activity centers on the reproduction, analysis/synthesis, and perception of sound spaces.

The team’s scientific disciplines are signal processing and acoustics, applied to the development of spatialized audio reproduction techniques and of methods for the analysis/synthesis of a sound field. In parallel, the team devotes a large share of its time to cognitive studies on multisensory integration, informing the principled development of new sonic mediations based on body/hearing/space interaction. The scientific activities described below are combined with the development of software libraries. These developments build on the team’s expertise and on its academic and experimental research activities, and they are the major vector of our relationship with musical creation and other application domains.

The work carried out on spatialization techniques concentrates on models based on a physical formalism of the sound field. The primary objective is the development of a formal framework for the analysis/synthesis of the sound field using spatial room impulse responses (SRIRs). SRIRs are generally measured with spherical arrays featuring several dozen transducers (microphones and/or loudspeakers). The principal application is the development of convolution reverberators that use these high-spatial-resolution SRIRs to faithfully reproduce the complexity of a sound field.
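As a rough sketch of the principle (not the team's actual implementation), a convolution reverberator applies a measured impulse response to a dry signal by fast convolution. The hypothetical `convolve_reverb` below treats the SRIR as one impulse response per array channel and convolves all channels at once in the frequency domain:

```python
import numpy as np

def convolve_reverb(dry, srir, mix=0.5):
    """Apply a measured (spatial) room impulse response to a dry signal.

    dry  : 1-D array, anechoic input signal
    srir : 2-D array (n_channels, n_taps), one impulse response per
           channel of the measured SRIR (hypothetical layout)
    mix  : dry/wet balance in [0, 1]
    Returns a 2-D array of shape (n_channels, len(dry) + n_taps - 1).
    """
    n_out = len(dry) + srir.shape[1] - 1
    n_fft = 1 << (n_out - 1).bit_length()   # next power of two, avoids circular wraparound
    spec_dry = np.fft.rfft(dry, n_fft)
    # One FFT of the dry signal, multiplied against every SRIR channel at once
    wet = np.fft.irfft(np.fft.rfft(srir, n_fft, axis=1) * spec_dry,
                       n_fft, axis=1)[:, :n_out]
    out = np.zeros_like(wet)
    out[:, :len(dry)] = (1.0 - mix) * dry   # dry path, copied to every channel
    return out + mix * wet
```

Real-time use would replace the single large FFT with partitioned (block) convolution to bound latency, but the input/output relationship is the same.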

Binaural spatialization over headphones is also a focus of our attention. The evolution of listening practices and the democratization of interactive applications favor headphone listening on smartphones. Building on this sonic immersion, binaural listening has become the primary vector of three-dimensional listening. Based on the exploitation of head-related transfer functions (HRTFs), it is the only approach that currently ensures a precise and dynamic reconstruction of the perceptual cues responsible for auditory localization. It has become the reference tool for experimental research on spatial cognition in a multisensory context and for virtual reality applications.
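The principle can be illustrated with a minimal, hypothetical sketch: binaural rendering amounts to convolving a mono source with the head-related impulse response (HRIR) pair, the time-domain form of the HRTF, measured for the desired direction. The interaural time and level differences that drive localization are encoded in that filter pair:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono source at one fixed direction for headphone playback.

    mono       : 1-D anechoic source signal
    hrir_left  : 1-D HRIR for the left ear (hypothetical measurement)
    hrir_right : 1-D HRIR for the right ear
    Returns a (2, n) array: left and right headphone channels.
    """
    left = np.convolve(mono, hrir_left)    # ear-specific filtering encodes
    right = np.convolve(mono, hrir_right)  # interaural time/level differences
    return np.stack([left, right])
```

A dynamic renderer would additionally track head orientation and interpolate between measured HRIR pairs so the scene stays stable when the listener turns, which this static sketch omits.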

These 3D audio spatialization techniques, associated with a tracking system that captures the movements of a performer or a member of the audience, constitute an organological base essential for addressing questions of “musical, sound, and multimedia interaction”. They offer an opportunity to reflect on the “cognitive foundations” of the feeling of space, in particular on the coordination required among the various sensory modalities for the perception and cognition of space. More specifically, we wish to highlight the processes of integration between idiothetic cues (related to our motor actions) and the acoustic cues (localization, distance, reverberation, etc.) used by the central nervous system to build a spatial representation of the perceived environment.

On the musical level, our ambition is to provide models and tools that enable composers to situate sounds in a given space throughout the compositional process, from writing to concert. This contributes to making spatialization a parameter of musical writing. In the arts, this research also applies to post-production, to interactive sound installations, and to dance, through questions of sound/space/body interaction. The incorporation of sound spatialization in virtual reality environments opens scientific applications in neuroscience research, therapeutic systems, and transportation simulators.

  • Studio 1 © Laurent Ardhuin, UPMC
  • Studio 1 © Philippe Barbosa
  • Studio 1 © Philippe Barbosa
  • © Philippe Barbosa
  • VERVE project © Cyril Fresillon, CNRS
  • VERVE project © Cyril Fresillon, CNRS
  • Studio 1 © Philippe Barbosa
  • The Espace de projection equipped with the WFS system © Philippe Migeat
  • The Espace de projection equipped with the WFS system © Philippe Migeat


European and national projects

Rasputin

Simulation of architectural acoustics for a better spatial understanding using immersive navigation in real-time

Audioself

Contribution of audio and vestibular interactions to the perception of our own body

BiLi

Binaural Listening

Cosima

Collaborative Situated Media

EFFICAC(e)

Extended Frameworks For 'In-Time' Computer-Aided Composition

ENTRECORPS

Study of the mechanisms of interpersonal coordination in public human interactions

Legos

Sensori-motor learning in gesture-based interactive sound systems

ORPHEUS

Object Based Broadcasting

Verve

Personalised Virtual Reality Scenarios for Groups at Risk of Social Exclusion


Software (design and development)


SPAT Revolution

Real-time 3D-audio mixing engine, created for audio professionals.

Spat

A tool dedicated to spatializing sound in real-time.

ToscA

A Spat plugin that makes it possible to send/receive automation parameters over the network.

Panoramix

Panoramix is a workstation for the post-production of 3D audio content.

ADMix Tools

The ADMix tool suite can be used for the recording and reproduction of object-based audio content.

Team


Head Researcher: Olivier Warusfel
Researchers & Engineers: Markus Noisternig, David Poirier-Quinot, Thibaut Carpentier, Isabelle Viaud-Delmon
Doctoral Students: Franck Elisabeth, Franck Zagala, Pierre Massé, Vincent Martin
Guest Researcher: Marine Taffou
Research Composer: Nadine Schütz

Publications