Alberto Maria Gatti

Alberto Maria Gatti (b. 1992) is a composer, computer musician and sound designer. His main interest is electroacoustic music, ranging from acousmatic works to musical theater. Since 2018 he has been exploring the relationship between body and sound, using vibrating transducers to transform objects and bodies into acoustic diffusers. He has taken part in contemporary music festivals and events at institutions including Ircam (Journée Portes Ouvertes 2023), the Berlin Biennale, Inner Spaces, Tempo Reale, Museo Pecci, Milano Books, Milano Musica, Fabbrica Europa, Teatro dell'Opera di Roma and Forum Wallis, among others. He studied electronic music at the Florence Conservatory with Marco Ligabue and Simone Conforti, and obtained the AReMus master's degree at the Rome Conservatory. He then studied composition and sound direction with Tiziano Manca, Vittorio Montalti, Alvise Vidolin and Roberto Castello. Since 2021 he has been a sound designer with Musi-co. He is currently a freelance composer and teaches at the Fiesole School of Music and the G. Puccini Conservatory in La Spezia.

2023–24 Artistic Research Residency


Research Themes:
Design an experimental model of an intelligent real-time environment for an audio-tactile device specialized in spatialization and accurate sound reproduction.
Field of Research:

Composing with audio-tactile vibrations
In collaboration with the IRCAM-STMS teams Perception and Sound Design and Sound Music Movement Interaction.

The use of vibrating transducers has found a variety of applications in recent years, ranging from the electroacoustic to the strictly artistic. In particular, bone conduction of sound has undergone major developments, drawing attention to a new conception of sound perception. The difficulty behind this practice often lies in organizing the sound content to be diffused through vibrating transducers, which are frequently ill-suited to faithful sound reproduction. The aim of this project is to create software for the real-time analysis and automatic adaptation of sound content on devices built around one or more vibrating transducers, thereby also addressing the management of spatial sound diffusion. To this end, the software also provides a sound-flow control system based on motion sensors worn by users or performers. The outcome of the project will be a tool for studying the relationship between auditory and tactile musical perception during live performance, exploiting a hybrid cranial bone-conduction system alongside a traditional multichannel loudspeaker system.
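To make the adaptation step concrete, here is a minimal, hypothetical sketch (not the project's actual software) of one ingredient such a system might use: band-limiting audio to a transducer's reproducible range before diffusion, since many exciters render little energy outside a limited pass-band. The sample rate, the assumed 100 Hz–8 kHz pass-band, and the function name `adapt_for_transducer` are all illustrative assumptions.

```python
import numpy as np
from scipy import signal

SR = 48_000  # sample rate in Hz (assumption)

# Hypothetical usable band of a vibrating transducer: many exciters
# reproduce little energy below ~100 Hz or above ~8 kHz.
LOW_HZ, HIGH_HZ = 100.0, 8000.0

def adapt_for_transducer(x, sr=SR, low=LOW_HZ, high=HIGH_HZ):
    """Band-limit a signal to the transducer's usable range and
    normalise its peak, so energy the device cannot reproduce does
    not waste excursion or cause audible distortion."""
    # 4th-order Butterworth band-pass, applied as second-order sections
    sos = signal.butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
    y = signal.sosfilt(sos, x)
    peak = np.max(np.abs(y))
    return y / peak if peak > 0 else y

# Usage: adapt one second of broadband noise for the exciter.
rng = np.random.default_rng(0)
noise = rng.standard_normal(SR)
adapted = adapt_for_transducer(noise)
```

A real-time version would run the same filtering block-by-block on an audio callback, and the filter parameters could themselves be driven by analysis of the incoming signal or by the motion-sensor data mentioned above.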