Acoustic and Cognitive Spaces
The research and development activities of the Acoustic and Cognitive Spaces team focus on the reproduction, analysis/synthesis, and perception of sound scenes.

Ambisonic dome at Ircam
© Ircam-Centre Pompidou, photo: Quentin Chevrier
The team’s scientific disciplines include signal processing and acoustics for the development of spatial audio reproduction techniques and methods for analyzing and synthesizing sound fields. At the same time, the team conducts extensive cognitive studies on multisensory integration to support the informed development of new sound mediation methods based on the interaction between body, hearing, and space. The scientific research activities are closely linked to the development of software libraries. These developments capture the team’s know-how, support theoretical and experimental research, and serve as the main interface with musical creation and other applied domains.
Work on spatialization techniques focuses on models based on a physical formalism of the sound field. The main objective is to develop a formal framework for analyzing and synthesizing sound fields using Spatial Room Impulse Responses (SRIRs). SRIRs are generally measured with spherical arrays containing several dozen transducers (microphones and/or loudspeakers). The main application is the development of convolution reverberators using these high-resolution SRIRs to faithfully reproduce the complexity of the sound field. Analysis-synthesis methods also allow these SRIRs to be treated as structured, manipulable musical materials. In addition, synthesis methods are currently used to generate a corpus of realistic impulse responses capable of feeding machine learning processes.
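The core of a convolution reverberator is the linear convolution of a dry signal with a measured impulse response, one channel per transducer of the array. The following sketch illustrates the principle with numpy, using a synthetic exponentially decaying noise burst as a stand-in for one channel of a measured SRIR; real-time engines use partitioned convolution for low latency, and the sample rate and decay time here are illustrative assumptions.

```python
import numpy as np

fs = 8000                      # sample rate (Hz), reduced for this sketch
rt60 = 1.0                     # target reverberation time (s), illustrative

# Synthetic stand-in for one channel of a measured SRIR:
# exponentially decaying white noise reaching -60 dB after rt60 seconds.
n = int(rt60 * fs)
t = np.arange(n) / fs
decay = 10 ** (-3 * t / rt60)
rng = np.random.default_rng(0)
ir = rng.standard_normal(n) * decay

# Dry input: a short click train.
dry = np.zeros(fs)
dry[::2000] = 1.0

# Convolution reverb = linear convolution of the dry signal with the IR,
# computed here in a single FFT block (zero-padded to avoid wrap-around).
size = len(dry) + len(ir) - 1
nfft = 1 << (size - 1).bit_length()
wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(ir, nfft), nfft)[:size]
```

With a high-resolution SRIR, the same operation is repeated for every measured channel to reconstruct the spatial structure of the reverberant field.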
Binaural spatialization over headphones is also a focus of the team. Evolving listening practices and the widespread use of interactive applications favor headphone listening, particularly via smartphones. Thanks to its immersive qualities, binaural listening has become the primary means of 3D sound perception. Based on Head-Related Transfer Functions (HRTFs), it remains the only approach that allows accurate and dynamic reconstruction of the auditory cues responsible for localization. It is an essential tool for experimental research on spatial cognition in multisensory contexts and for virtual reality applications.
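Binaural rendering amounts to convolving a mono source with the pair of head-related impulse responses (HRIRs) for its direction. The toy example below fakes the two main localization cues, the interaural time difference (ITD) and the interaural level difference (ILD), with a delayed, attenuated impulse; real HRIRs are measured per direction and stored in formats such as SOFA, and all numeric values here are illustrative assumptions.

```python
import numpy as np

fs = 48000
# Toy HRIR pair for a source on the listener's right. We approximate the
# far (left) ear with a pure delay (ITD) and attenuation (ILD); measured
# HRIRs would also encode the spectral cues of the head and pinnae.
itd = int(0.0006 * fs)                 # ~0.6 ms interaural time difference
hrir_left = np.zeros(256)
hrir_left[itd] = 0.5                   # far ear: delayed and attenuated
hrir_right = np.zeros(256)
hrir_right[0] = 1.0                    # near ear: direct

mono = np.sin(2 * np.pi * 440 * np.arange(fs // 10) / fs)  # 100 ms, 440 Hz

# Binaural rendering = per-ear convolution of the source with its HRIR.
left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right], axis=1)   # (samples, 2) headphone feed
```

Dynamic rendering, as used in virtual reality, updates the HRIR pair as the head tracker reports new orientations, keeping the localization cues consistent during movement.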
At the same time, object-based audio (OBA) formats are being adopted in production and distribution chains, especially in music and film. Current productions largely rely on the Audio Definition Model (ADM), which underlies Dolby Atmos. While the OBA paradigm potentially allows full user interaction with a sound scene (navigation in 6 degrees of freedom, transformation of object position or orientation), the existing ADM format has significant limitations that restrict its uses (often limited to 3-DoF rotation of the listener at the center of the scene). The team is currently working on developing a hierarchical metadata scheme, associated with the concept of acoustic objects, to extend ADM to interactive 6-DoF applications.
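The value of a hierarchical scheme is that grouped objects can be transformed together during 6-DoF navigation. The sketch below is a hypothetical illustration of that idea only: the class and field names are assumptions for this example, not the team's actual ADM extension or its acoustic-object vocabulary.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a hierarchical "acoustic object" node; all field
# names here are illustrative assumptions, not an actual ADM extension.
@dataclass
class AcousticObject:
    name: str
    position: tuple = (0.0, 0.0, 0.0)     # metres, scene coordinates
    orientation: tuple = (0.0, 0.0, 0.0)  # yaw/pitch/roll, degrees
    children: list = field(default_factory=list)

    def world_position(self, parent=(0.0, 0.0, 0.0)):
        # Minimal hierarchy: child positions are relative to the parent,
        # so moving a group moves all its members (rotation omitted here).
        return tuple(p + q for p, q in zip(parent, self.position))

# A string quartet grouped under one parent: translating the group during
# 6-DoF navigation updates every instrument position consistently.
quartet = AcousticObject("quartet", position=(2.0, 0.0, 0.0), children=[
    AcousticObject("violin_1", position=(-0.5, 0.5, 0.0)),
    AcousticObject("cello", position=(0.5, -0.5, 0.0)),
])
for child in quartet.children:
    print(child.name, child.world_position(quartet.position))
```

A flat object list, as in current ADM practice, would instead require rewriting every object's metadata whenever the listener or a source group moves.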
These 3D audio spatialization techniques, combined with systems for capturing the movements of performers or listeners in space, provide a fundamental organological basis for addressing musical, sound, and multimedia interaction. They also inform research on cognitive mechanisms related to the sensation of space, particularly the coordination of multiple sensory modalities (hearing, vision, proprioception, motor activity) for the perception and representation of space. The team studies how different acoustic cues (localization, distance, reverberation, etc.) used by the human central nervous system influence sensory integration and their interaction with emotional processes.
The team is also involved in the Sound–Music–Health axis. In this context, it conducts studies on atypical sound perception in children with moderate to severe autism and participates in research on characterizing and treating tinnitus using virtual reality.
In the musical domain, the team aims to provide models and tools that allow composers to integrate spatialization of sounds from the composition stage to the concert situation, thereby elevating spatialization to a compositional parameter. More broadly, these studies apply to post-production, interactive sound installations, and dance through the challenges of sound/space/body interaction.
Olivier Warusfel
Team Leader
Isabelle Viaud-Delmon
Researcher
Markus Noisternig
Researcher
Coralie Vincent
Engineer
Thibaut Carpentier
Engineer
Benoît Alary
R&D Engineer
Valentin Bauer
Research Fellow
Alice Pain
PhD Student
Anthony Gallien
PhD Student
Louis Gosselin
Intern
Paulin Roman
Intern
- Sound spatialization: hybrid reverberation and Spatial Room Impulse Responses (SRIRs), SRIR analysis-synthesis, sound field synthesis with high-density arrays, Wave Field Synthesis (WFS) and Higher-Order Ambisonics (HOA) systems, binaural listening, distributed spatialization, augmented reality.
- Cognitive foundations: multisensory and emotional integration, music and brain plasticity, perception of distance in augmented reality.
- Creation / Mediation: room auralization, urban and landscape composition, directivity synthesis.
Related Projects
Sound–Music–Health Axis
This cross-cutting Sound–Music & Health strand brings together research related to wellbeing and health conducted within the STMS laboratory.
Synthesis of Directionality by Corpus
Aaron Einbond's artistic residency focuses on the cohabitation of instrumental and synthetic sounds in a diffusion space. The playing of an instrumentalist on stage is captured and analyzed in real time: various (timbral) audio descriptors are computed and used to drive corpus-based concatenative synthesis of the electronics (realized with CataRT-MuBu). This raises the question of how the samples (grains) of the corpus should be diffused. For this purpose, we use the IKO, a compact spherical loudspeaker array that can simulate radiation patterns (described by their third-order spherical harmonic representation). The radiation patterns are selected from a directivity database of (historical and modern) acoustic instruments measured and made available by TU Berlin: the audio descriptors computed from the player's performance are used to select one or more instruments from the TU database and apply their directivity patterns to the grains. The underlying idea is not to faithfully reproduce the spatial radiation of the instruments, but to give the synthesized sounds a "natural, plausible" spatiality, so that the electronics merge harmoniously with the acoustic instruments on stage.
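A third-order spherical-harmonic representation describes a directivity pattern with 16 coefficients. As a minimal illustration of what such a pattern looks like, the sketch below evaluates the axisymmetric order-3 beam built from Legendre polynomials; this is a toy stand-in, not the TU Berlin data or the residency's actual processing, and real instrument directivities require the full set of 16 coefficients rather than these 4 axisymmetric ones.

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Axisymmetric order-3 beampattern: D(gamma) = sum_{n=0}^{3} (2n+1) P_n(cos gamma),
# where gamma is the angle off the beam's main axis. This is the simplest
# third-order spherical-harmonic directivity one can write down.
order = 3
coeffs = 2 * np.arange(order + 1) + 1          # (2n+1) weights, n = 0..3

def directivity(gamma):
    """Gain of the order-3 beam at angle gamma (rad) off its main axis."""
    return legval(np.cos(gamma), coeffs)

# Sample the pattern over 0..180 degrees and normalize to the on-axis gain.
angles = np.linspace(0, np.pi, 181)
pattern = directivity(angles)
pattern_db = 20 * np.log10(np.abs(pattern) / pattern[0] + 1e-12)
```

A device like the IKO approximates such a target pattern by driving its loudspeakers with filters derived from the spherical-harmonic coefficients.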
Urban and Landscape Composition
Related Software
Spat Revolution
The Immersive Audio Revolution, providing artists, sound designers, and sound engineers virtually unlimited possibilities to design, create, and mix an outstanding real-time immersive experience.
Software for sale at Ircam
Spatialization
Spat
Spat is a software suite for spatialization of sound signals in real-time intended for musical creation, postproduction, and live performances.
Free software
Spatialization
Panoramix
Panoramix is a standalone application dedicated to spatial audio mixing and post-production.
Software included in premium subscription
Sound design and processing
Spatialization