Maxime Mantovani


Maxime Mantovani (b. 1984, France) is a contemporary music composer and improviser. He graduated with a Master's degree in composition from the CNSMD of Lyon, with honors and the unanimous approval of the jury, under the direction of François Roux. His musical writing is strongly informed by technology, more precisely by computer music, sound synthesis, and electronic instruments. His interest in electroacoustic lutherie leads him to design custom instruments: hardware and software interfaces that allow him to rediscover the elasticity and plasticity of musical manipulation found in analog studios. He adapts to these interfaces sound treatments specially developed to be played in dialogue with live musicians.

Mantovani attaches great importance to the technique of instrumental writing and to the latest advances in computer-assisted composition; each new composition is an opportunity for exchange with the instrumentalists. Better understanding the kinesthesia linked to their instruments is, for him, a resource for both instrumental and electronic writing. The challenge is to find forms of notation that are simple, accurate, and strong in sonic meaning. He has obtained various grants to help finance his work on the creation of digital audio control interfaces, as well as residency grants to further that work. He is currently in artistic research residency at IRCAM under the direction of Philippe Esling, within the ACIDS artificial intelligence research team. The objective of this residency is to improve his electronic lutherie models and to control, in a musical way, the new forms of sound generation made possible by AI.

Photo © Émilie Zasso

2020-21 Artistic Research Residency

Improvisation, deep learning and prior matching

Research Theme
Creation of electronic interfaces tailored to the control of real-time systems using state-of-the-art artificial intelligence models.
In collaboration with IRCAM-STMS Musical Representations

The focus of this residency is to design an interface, both hardware and software, conceived specifically for the real-time control of artificial intelligence models. The aim of this interface is to offer new and innovative ways of generating expressive electroacoustic sound using the latest deep neural network algorithms. A dedicated electronic interface enables the development of instrumental gestures closely tied to the sound, while ensuring reproducibility and a high degree of expression through the involvement of the body. AI systems have great potential to generate highly expressive musical sound, and the combination of these two aspects of sound generation offers fascinating and unexpected possibilities. Maxime Mantovani will use his interfaces while improvising with two musicians: a percussionist and a tuba player. Using the interfaces in an improvised music context first teaches the user how to play them, and second places them in a situation where the electroacoustic system must be as reactive as an acoustic musician. The training of the deep networks is not limited to sound databases: it extends to databases of the gestures produced and recorded during the improvisations carried out over the course of the residency.