10 a.m.–5 p.m.
In English
Free, reservations required
The Sound/Music and Health axis of the STMS laboratory is organizing its second annual day devoted to its research projects. This axis brings together the laboratory's activities on the use of sound for the prevention and promotion of health and well-being, on sound-based interventions in patient care, and on the mechanisms of music and sound perception in humans.
Its objectives are to pool efforts in order to increase the effectiveness, visibility, and legitimacy of these actions (notably through medical collaborations) and to broaden the scope of the research.
The day will feature project presentations by pairs or trios composed of one or two researchers in health and one researcher in signal processing, computer science, or music. The projects address sonification and motor disorders, atypical sound perception in autism, hidden hearing loss, auditory salience of sound scenes in the hearing impaired, and peripersonal space and hearing deficits.
Program
10:00am | Introduction
with Isabelle Viaud-Delmon, CNRS researcher, Acoustic and Cognitive Spaces team at the STMS laboratory
10:15am | Transforming Body Perception through Sound, Movement, and Walking: Neuroscientific, Human-Computer Interaction, and AI-driven Approaches and Applications for Health
with Ana Tajadura-Jiménez, Associate Professor, i_mBODY Lab, Universidad Carlos III de Madrid (Spain), and Honorary Research Fellow, UCL Interaction Centre, University College London (UK)
Body perception is essential for motor, social, and emotional functioning, as well as for health, and is continuously shaped by sensorimotor information. Building on neuroscience, HCI, and AI, our lab investigates Body Transformation Experiences: perceptual illusions of one's own body changing, created through sensory feedback. This talk will present our work showing how sound feedback during walking can reshape body perception and influence emotion, behavior, and identity. I will share examples of how our sound-driven, body-centered technologies can support health in both controlled and real-world settings, while also serving as tools to study multisensory influences on body perception. We aim to develop a framework for individualized, long-lasting sensory manipulation of body perception, grounded in four pillars: multisensory neuroscience, AI-based modeling, wearable interaction design, and field studies. Finally, I will identify challenges and opportunities in this research field.
10:45am | Multisensory Contributions to Locomotor Development
with Yury Ivanenko, Head of the Gait Laboratory, IRCCS Fondazione Santa Lucia, Rome (Italy)
Human locomotor development relies on early-emerging rhythm generators and on the progressive integration of multisensory information. Vision, somatosensation, vestibular signals, and auditory cues jointly shape postural control and the maturation of stepping patterns. While my presentation will focus on the basic motor features of rhythmic pattern generation, infants experience, from the perinatal period onward, a rich rhythmic acoustic environment that entrains spontaneous motor tempo and supports the emergence of coordinated locomotor rhythms. As gait circuits mature, sensory feedback tunes muscle synergies and adaptive control. Auditory inputs provide spatial and temporal predictions that stabilize movement, while locomotion itself modulates auditory cortical processing, revealing a bidirectional coupling. Understanding these interactions offers insight into locomotor maturation and may inform sensory-based strategies for gait enhancement and rehabilitation.
Discussion
11:35am | Role of Atypical Auditory Perception in the Well-being of Children with Autism and Intellectual Disability
with Pierre-Luc Bossé, Director of the Life Projects and Innovation Support Unit, Fondation John BOST; Pierre Cornaggia, Coordinator of Psychoeducational Practices – NDD and Complex Situations Expert, Fondation John BOST; and Valentin Bauer, postdoctoral researcher, Acoustic and Cognitive Spaces team at the STMS laboratory
Nonverbal or minimally verbal children with autism and intellectual disability often exhibit atypical auditory perception, which makes some sounds aversive and others pleasant and significantly affects their daily functioning and the support they can receive. Hearing protection provides temporary relief but, with prolonged use, can exacerbate overstimulation and phonophobia once it is removed. Assessment methods exist, based on caregiver questionnaires or smiley-based valence ratings, but they remain poorly suited to these populations. To address these limitations, we have designed a behavioral method using looming sounds in audio augmented reality. These sounds are pre-selected for each child by two healthcare professionals from a corpus of familiar sounds created through a participatory approach. A multicenter study is currently underway to evaluate the validity of this method.
Lunch Break
1:45pm | Bridging the Gap between the Lab and the Clinic: From Animal Models and Human Research to Clinical Diagnostics for Cochlear Synaptopathy
with Fabrice Giraudet, Associate Professor, Biophysics Team (UMR INSERM 1107, Faculty of Medicine of Clermont-Ferrand, Université Clermont Auvergne), and Emmanuel Ponsot, CNRS researcher, Sound Perception and Design team at the STMS laboratory
Cochlear synaptopathy, or "hidden hearing loss," represents a significant challenge in auditory science. This condition, characterized by the loss of synapses between inner hair cells and auditory nerve fibers, does not affect pure-tone hearing thresholds, rendering it undetectable by standard audiometry. This presentation will feature an interactive discussion between two researchers (one in human auditory neuroscience, the other specializing in animal models and also working as a hearing-care professional) on the quest to assess cochlear synaptopathy in living humans. We will briefly review the seminal discoveries in animal models that unveiled this pathology and discuss the current understanding of how it disrupts neural coding within the auditory system. The presentation will then evaluate current neurophysiological approaches (e.g., ABR, MEMR, EFR) used in both animal and human research to identify sensitive biomarkers. A central challenge we will address is the difficulty of quantifying the impact of synaptopathy on auditory perception, particularly its contribution to the deficits in speech understanding that emerge in challenging listening situations. We will conclude by discussing why the future of hearing healthcare hinges on the development and clinical translation of novel diagnostic tools capable of assessing supra-threshold auditory deficits such as cochlear synaptopathy, ultimately paving the way for early intervention and personalized treatment strategies.
2:40pm | Possible Sources of Abnormal Salience for Hearing-Impaired People with Hearing Aids
with Paul Avan, Director of the Center for Research and Innovation in Human Audiology, IHU reConnect; and Armand Schwarz, PhD student, Sound Perception and Design team at the STMS laboratory and Institut Pasteur, IHU reConnect
Hearing-impaired listeners often report that certain everyday sounds (kitchen noises, vehicle sounds…) become abnormally intrusive when using hearing aids. Hearing-aid acousticians typically address these complaints by assuming that they mostly stem from excessive loudness, linked to the loss of physiological cochlear compression that accompanies most types of hearing loss. The combination of hearing loss and hyperacusis is sometimes invoked, all the more so since, for lack of a better vocabulary, patients often describe uncomfortable sounds as being too loud. We propose instead that these complaints arise from an abnormal salience associated with these sounds. Auditory salience is defined as a bottom-up process that directs attention to specific sounds within a complex acoustic scene, based on the physical properties of the stimuli. We sought physical dimensions of sound that, after passing through the hearing aid and the impaired auditory system, could yield a greater perceptual difference. To that end, we investigated two potential peripheral contributors: distortions in loudness growth (by measuring loudness functions for a set of complex sounds) and timbre perception (through a multidimensional analysis of complex synthetic stimuli).
3:35pm | Multisensory Experiments for Everyone: What Smartphones Can Bring to Hearing Care and Experimental Studies
with Elisa Taffoureau, audiologist and PhD student at the ENS; and Ulysse Roussel, PhD student, Acoustic and Cognitive Spaces team at the STMS laboratory
This presentation explores how smartphones can enable precise, scalable, and accessible multisensory experiences outside laboratory settings, in more ecological environments. Building on our study demonstrating millisecond-level accuracy in audio–tactile reaction-time measurements on Android devices, we show how smartphones can be assessed and validated as reliable tools for investigating perception, attention, and multisensory integration. Beyond research, these tools open new possibilities, particularly for audiological practice. By allowing clinicians to collect data from patients in their real-world environments, they could help provide a more refined understanding of individual profiles and support the personalised adjustment of hearing aids. Such tools may ultimately pave the way for future personalised sound-comfort systems based on continuous, user-centred data.
Conclusion
Closing coffee
Organization
Sound Music Movement Interaction, Sound Perception and Design, and Acoustic and Cognitive Spaces teams at the STMS laboratory (IRCAM, Sorbonne University, CNRS, Ministry of Culture)