• Research

    The fundamental principle of IRCAM is to encourage productive interaction among scientific research, technological developments, and contemporary music production. Since its establishment in 1977, this initiative has provided the foundation for the institute’s activities. One of the major issues is the importance of contributing to the renewal of musical expression through science and technology. Conversely, sp…

    • Research Topics
    • The STMS Lab
    • Research Teams
    • Sound Systems and Signals: Audio/Acoustics, InstruMents
    • Acoustic and Cognitive Spaces
    • Sound Perception and Design
    • Sound Analysis-Synthesis
    • Sound Music Movement Interaction
    • Musical Representations
    • Analysis of Musical Practices
    • Projects
    • Sound Workshop
    • The Musical Body
    • Creative Dynamics
    • Finished Projects
    • Musique/Sciences Collection
  • Creation

    IRCAM is an internationally recognized research center dedicated to creating new technologies for music. The institute offers a unique experimental environment where composers strive to enlarge their musical experience through the concepts expressed in new technologies.

    • Composers & Artists in Studio
    • "I am in blood" for Sixteen Musicians and Live Electronics
    • Lucie Antunes in studio
    • Deena Abdelwahed in the studio
    • "Transfer" for 10 Musicians and Electronics
    • L'Annonce faite à Marie
    • In Ex Machina
    • Jazz Ex Machina
    • Improvise cum machina 1/2
    • Improvise cum machina 2/2
    • Like Sound, Like Flesh
    • Silent Talks
    • Music-Fictions
    • Artistic Research Residency
    • Artistic Residencies: The Blog
    • Rendez Vous 20.21
    • Season 2022.23
    • Seasons from 1996 to present
    • ManiFeste-2022 Website
    • ManiFeste festival from 2012 to 2022
    • L’Étincelle, IRCAM’s journal of creation
  • Transmission

    In support of IRCAM's research and creation missions, the educational program seeks to shed light on the current and future meaning of the interactions among the arts, sciences, and technology as well as sharing its models of knowledge, know-how, and innovations with the widest possible audience.

    • 2022.23 Training Courses
    • Max, Max for Live
    • OpenMusic
    • Modalys
    • TS2 and Partiels
    • Sound spatialization
    • From PureData to audio plugins
    • Sensors, Interfaces, and Interactive Machine Learning
    • Other training programs
    • Practical Information
    • Advanced Programs
    • Cursus Program on Composition and Computer Music
    • Supersonic Chair
    • Master ATIAM
    • Sound Design Master's Program
    • Music Doctorate
    • AIMove Master
    • School Programs
    • Studios of Creation
    • Mixed-Music
    • Artistic and Cultural Education
    • Career Discovery Visit
    • Images of a Work Collection
    • ManiFeste-2022, the Academy
  • Innovations

    At the center of societal and economic concerns combining culture and information technologies, the current research at IRCAM is seen by the international research community as a reference for interdisciplinary projects on the sciences and technologies for sound and music, constantly exposed to society’s new needs and uses.

    • The IRCAM Forum
    • Subscribe to the Forum
    • Softwares
    • Ircam Amplify
    • Industrial Applications
    • Industrial Licenses
    • Forum Vertigo
  • IRCAM
  • Careers & Job Offers
  • Calls for applications
  • Newsletter
  • Arrive
  • Boutique
  • Resource Center
  • News

Acoustic and Cognitive Spaces

The research and development activities of the Acoustic and Cognitive Spaces team center on the reproduction, analysis/synthesis, and perception of sound spaces.

The team’s main scientific disciplines are signal processing and acoustics, applied to the development of spatialized audio reproduction techniques and of methods for the analysis/synthesis of sound fields.

In parallel, the team devotes a large share of its time to cognitive studies on multisensory integration, with the aim of grounding the development of new sonic mediations based on body/hearing/space interaction. The scientific activities described below are combined with the development of software libraries. These developments build on the team’s expertise and on its academic and experimental research activities, and they are the major vector of our relationship with musical creation and other application domains.

The work on spatialization techniques concentrates on models based on a physical formalism of the sound field. The primary objective is the development of a formal framework for the analysis/synthesis of the sound field using spatial room impulse responses (SRIRs). SRIRs are generally measured using spherical arrays featuring several dozen transducers (microphones and/or loudspeakers). The principal application is the development of convolution reverberators that use these high-spatial-resolution SRIRs to faithfully reproduce the complexity of a sound field.
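To make the idea of SRIR-based convolution reverberation concrete, here is a minimal sketch in Python (numpy/scipy). It simply convolves a dry signal with each channel of a multichannel impulse response; the channel count, lengths, and synthetic decay below are hypothetical, and the actual processing developed by the team (e.g., hybrid reverberation in Spat~) is considerably more elaborate.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_with_srir(dry: np.ndarray, srir: np.ndarray) -> np.ndarray:
    """Convolve a mono dry signal with a multichannel spatial room
    impulse response (SRIR).

    dry  : shape (n_samples,)          anechoic source signal
    srir : shape (n_channels, ir_len)  one impulse response per channel
    returns shape (n_channels, n_samples + ir_len - 1)
    """
    return np.stack([fftconvolve(dry, ir) for ir in srir])

# Hypothetical example: 2 s of noise through a 32-channel, 1.5 s SRIR
# with an exponential decay standing in for a measured response.
fs = 48_000
dry = np.random.randn(2 * fs)
decay = np.exp(-np.linspace(0.0, 8.0, int(1.5 * fs)))
srir = np.random.randn(32, int(1.5 * fs)) * decay
wet = convolve_with_srir(dry, srir)  # one reverberated feed per channel
```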

Binaural spatialization over headphones is also a focus of our attention. The evolution of listening practices and the democratization of interactive applications tend to favor listening with headphones through smartphones. Taking advantage of this sonic immersion, binaural listening has become the primary vector of three-dimensional listening. Based on the exploitation of head-related transfer functions (HRTFs), it is currently the only approach that ensures a precise and dynamic reconstruction of the perceptual cues responsible for auditory localization. It has become the reference tool for experimental research on spatial cognition in a multisensory context and for virtual reality applications.
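To make the HRTF mechanism concrete, here is a minimal sketch in Python (numpy/scipy), under the assumption that a set of head-related impulse responses (HRIRs) measured on a grid of directions is available: the mono source is filtered by the left- and right-ear HRIRs of the desired direction, yielding a two-channel headphone signal. The nearest-direction lookup is a deliberate simplification; real renderers interpolate between measured directions and update the filters dynamically as the source or the head moves.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono: np.ndarray, hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
    """Filter a mono signal with the left/right HRIRs of one direction,
    returning a (2, n) array to be played over headphones."""
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)])

def nearest_direction(azimuth: float, elevation: float,
                      measured_dirs: np.ndarray) -> int:
    """Index of the measured HRIR direction closest to (azimuth, elevation),
    both in degrees; measured_dirs has shape (n_directions, 2)."""
    d = measured_dirs - np.array([azimuth, elevation])
    return int(np.argmin((d ** 2).sum(axis=1)))
```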

These 3D audio spatialization techniques, associated with a tracking system that captures the movements of a performer or of a member of the audience, constitute an organological base essential for addressing the issues of "musical, sound, and multimedia interaction". At the same time, they nourish research on the cognitive mechanisms related to the sensation of space, in particular on the coordination between the various sensory modalities (hearing, vision, proprioception, motricity, ...) required for the perception and representation of space. We seek to uncover how the different acoustic cues (location, distance, reverberation, ...) used by the human central nervous system influence the integration of sensory information and how they interact with emotional processes.
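One small but essential piece of such a dynamic reconstruction is compensating the listener’s head movements so that the auditory scene stays anchored to the room rather than to the head. The sketch below (hypothetical, yaw-only, angles in degrees) shows the principle: the azimuth passed to the HRTF lookup is the source’s world azimuth expressed relative to the tracked head orientation.

```python
def head_relative_azimuth(source_az_world: float, head_yaw: float) -> float:
    """Yaw-only head-tracking compensation, angles in degrees.

    The source is fixed in the room; the azimuth handed to the HRTF lookup
    is re-expressed relative to the listener's current head orientation so
    that the scene remains stable while the head turns.
    """
    return (source_az_world - head_yaw + 180.0) % 360.0 - 180.0

# A source fixed at +30 degrees: once the head turns to face it (yaw = +30),
# the renderer presents it straight ahead.
assert head_relative_azimuth(30.0, 30.0) == 0.0
```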

On the musical level, our ambition is to provide models and tools that enable composers to work with the placement of sounds in space throughout the compositional process, from writing to the concert, thereby making spatialization a true parameter of musical writing. In the arts, this research also applies to post-production, to interactive sound installations, and to dance, through the questions raised by sound/space/body interaction. The incorporation of sound spatialization in virtual-reality environments also opens scientific applications in neuroscience research, therapeutic systems, and transportation simulators.

Major themes

  • Sound spatialization: hybrid reverberation and spatial room impulse responses (SRIRs), SRIR analysis-synthesis, synthesis of sound fields via high-density loudspeaker arrays, WFS and HOA systems in the Espace de Projection, binaural listening, distributed spatialization (CONTINUUM) (see the Ambisonic encoding sketch after this list)
  • Cognitive foundations: auditory spatial cognition, multisensory integration and emotion, the Entrecorps project, music and cerebral plasticity, perception of distance in augmented reality
  • Creation / mediation: audio rendering of spaces in the RASPUTIN project
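As a small illustration of the Ambisonic (HOA) side of these themes, here is a hypothetical first-order (B-format) encoder in Python: a mono source is encoded into four channels from its azimuth and elevation. Real HOA workflows use higher orders, explicit normalization conventions, and decoders matched to a given loudspeaker layout; this only shows the encoding principle.

```python
import numpy as np

def encode_foa(mono: np.ndarray, azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """Encode a mono signal into first-order Ambisonics (ACN order: W, Y, Z, X,
    SN3D-style gains). Returns an array of shape (4, n_samples)."""
    az, el = np.deg2rad(azimuth_deg), np.deg2rad(elevation_deg)
    w = mono                              # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)    # front/back
    y = mono * np.sin(az) * np.cos(el)    # left/right
    z = mono * np.sin(el)                 # up/down
    return np.stack([w, y, z, x])
```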

Team Website

[Photos: Studio 1 (© Laurent Ardhuin, UPMC; © Philippe Barbosa), the VERVE project (© Cyril Fresillon, CNRS), and the Espace de Projection equipped with the WFS system (© Philippe Migeat).]

Collaborations

ARI-ÖAW (Austria), Bayerischer Rundfunk (Germany), BBC (United Kingdom), B<>COM (France), Ben Gurion University (Israel), Conservatoire national supérieur de musique et de danse de Paris (France), CNES (France), elephantcandy (Netherlands), France Télévisions (France), Fraunhofer IIS (Germany), Hôpital de la Salpêtrière (France), HEGP (France), Hôpital universitaire de Zürich (Switzerland), IRBA (France), IRT (Germany), L-Acoustics (France), Joanneum Research (Austria), LAM (France), McGill University (Canada), Orange Labs (France), RWTH (Germany), Radio France (France), RPI (United States)


Research topics and related projects

  • Binaural Listening
  • Corpus-Based Concatenative Synthesis: a database of recorded sounds and a unit-selection algorithm (see the sketch after this list)
  • Urban and Landscape Composition
  • WFS and Ambisonic Systems in the Espace de Projection
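Corpus-based concatenative synthesis builds new sounds by selecting short units from a database of recorded sounds whose audio descriptors best match a target. The sketch below is a simplified, hypothetical nearest-neighbour unit selection in Python; real systems such as IRCAM’s CataRT also weight descriptors per task, add concatenation costs between successive units, and handle granular overlap-add playback.

```python
import numpy as np

def select_units(target_descr: np.ndarray, corpus_descr: np.ndarray,
                 weights: np.ndarray | None = None) -> np.ndarray:
    """For each target frame, pick the corpus unit with the closest descriptors.

    target_descr : (n_targets, n_descr) descriptors of the desired sound, frame by frame
    corpus_descr : (n_units,  n_descr)  descriptors of every unit in the database
    returns      : (n_targets,)         index of the selected unit for each frame
    """
    if weights is None:
        weights = np.ones(corpus_descr.shape[1])
    diff = target_descr[:, None, :] - corpus_descr[None, :, :]
    dist = np.sqrt(((diff * weights) ** 2).sum(axis=-1))   # weighted Euclidean distance
    return dist.argmin(axis=1)
```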

European and national projects

  • HAIKUS: artificial intelligence applied to augmented acoustic scenes
  • Continuum: the live performance augmented in its sound dimensions
  • DAFNE+: a decentralized platform for fair creative-content distribution, empowering creators and communities through new digital distribution models based on digital tokens
  • RASPUTIN: simulation of architectural acoustics for a better spatial understanding, using real-time immersive navigation



Software (design and development)

  • SPAT Revolution: a real-time 3D-audio mixing engine created for audio professionals.
  • Spat~ (free): a library dedicated to real-time sound spatialization. It enables musicians and sound engineers to control spatial sound processing for a wide range of reproduction systems; applications range from sound reproduction in a home setting to concert situations, holophony, and interactive sound installations.
  • ToscA (free): a plugin that sends and receives the automation parameters of a digital audio workstation to or from other applications using the OSC protocol. It is typically used to produce object-oriented spatialized mixes independently of the host software’s constraints (see the OSC sketch after this list).
  • Panoramix (included in the Forum membership): a post-production workstation for 3D-audio content, offering a comprehensive environment for mixing, reverberating, and spatializing sound material from different microphone systems: main tree, spot microphones, and Higher Order Ambisonics capture.
  • ADMix Tools (free): a tool suite for recording (ADMix Recorder) and playback (ADMix Renderer) of object-based audio content. The object-based format follows the Audio Definition Model (ADM) defined by international standardization bodies (the European Broadcasting Union and the International Telecommunication Union).
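To give an idea of the kind of OSC exchange ToscA relies on, here is a small hypothetical sender using the python-osc package. The port and address pattern are made up for the example; the actual addresses depend on how the plugin and the receiving application are configured.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical setup: a receiving application listening for OSC on localhost:4002.
client = SimpleUDPClient("127.0.0.1", 4002)

# Send a short azimuth automation ramp (the address pattern is illustrative).
for azimuth in (0.0, 5.0, 10.0, 15.0):
    client.send_message("/track/1/azimuth", azimuth)
```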

Team


Head Researcher: Olivier Warusfel
Researchers & Engineers: Markus Noisternig, Thibaut Carpentier, Benoît Alary, Isabelle Viaud-Delmon
Doctoral Student: Anatole Moreau
Trainees: Dorian Vernet, Alexandre Philippon
Engineer: Coralie Vincent



Publications

Also discover

News | Research

Thibaut Carpentier, Laureate of the CNRS 2018 Cristal Medal

The joint research unit Sciences and Technologies of Music and Sound has the immense pleasure of seeing a CNRS Cristal medal reward the work of Thibaut Carpentier, an engineer in the…