Creation of interactive environments
Real-time software and hardware interactions: generative AI with RAVE and Diffusion, (inter)reactive improvisation agents with Dicy2 and Somax2, and gesture-sound interaction with R-IoT sensors, Max, and MuBu.
The ACIDS group, part of Ircam’s Musical Representations team, conducts research on deep generative models for musical and creative synthesis, with the goal of developing new tools that model musical creativity and expand sonic possibilities through machine learning. This work takes shape as real-time objects integrated into software environments such as Max/MSP and Ableton Live, notably through the RAVE, AFTER, and FlowSynth models and the nn~ library. These tools open up new forms of sound creation by providing instruments and deep audio models that embed artificial creative intelligence into musical performance.
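To make the workflow concrete: exported RAVE models are TorchScript files that nn~ loads and runs in real time inside Max or Ableton Live. The minimal sketch below reproduces the same encode/transform/decode loop in Python; the file name, sampling rate, and latent smoothing are placeholders, and the encode/decode method names are assumed from the acids-ircam export format rather than a guaranteed API.

```python
import torch

# Load a RAVE model previously exported to TorchScript
# (placeholder file name; export is done with the acids-ircam tooling).
model = torch.jit.load("rave_export.ts").eval()

# One second of mono audio, shaped (batch, channels, samples);
# 44.1 kHz is an assumption about the model's sampling rate.
audio = torch.randn(1, 1, 44100)

with torch.no_grad():
    z = model.encode(audio)              # audio -> learned latent representation
    z = 0.5 * (z + z.roll(1, dims=-1))   # illustrative latent-space transformation
    resynthesis = model.decode(z)        # latent -> audio, as nn~ does in real time

print(resynthesis.shape)
```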
In addition, the work of the Musical Representations team on generative processes—most recently as part of the ERC REACH (Raising Co-Creativity in Cyber-Human Musicianship) research project—has led to the development of innovative software tools. These tools enable the creation of interactive devices and the exploration of new forms of interaction between composers, musicians, and improvisers. Leveraging artificial listening and the synchronization of musical signals, Dicy2 and Somax2 implement intelligent agents capable of learning either in real time or in advance, facilitating immediate or pre-programmed musical interactions in both structured and improvised contexts.
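As a simplified illustration of the corpus-based, reactive idea behind such agents (and not of Dicy2's or Somax2's actual engines), the sketch below stores feature-annotated audio segments as a corpus and answers each incoming feature frame with the closest-matching segment; the class name, descriptors, and nearest-neighbour matching are placeholders chosen for brevity.

```python
import numpy as np

class ToyReactiveAgent:
    """Conceptual sketch: listen to a feature stream and reply with segments
    drawn from a corpus learned beforehand or on the fly."""

    def __init__(self):
        self.features = []   # one descriptor vector per corpus segment
        self.segments = []   # corresponding audio segments (numpy arrays)

    def learn(self, segment, feature):
        """Learning step: store a segment together with its descriptor."""
        self.segments.append(np.asarray(segment, dtype=float))
        self.features.append(np.asarray(feature, dtype=float))

    def react(self, incoming_feature):
        """Artificial-listening step: return the corpus segment whose
        descriptor is closest to what was just heard."""
        if not self.segments:
            return None
        query = np.asarray(incoming_feature, dtype=float)
        distances = [np.linalg.norm(query - f) for f in self.features]
        return self.segments[int(np.argmin(distances))]

# Hypothetical usage: in practice descriptors could be loudness, pitch, or MFCC frames.
agent = ToyReactiveAgent()
agent.learn(segment=np.zeros(2048), feature=[0.2, 440.0])
agent.learn(segment=np.ones(2048), feature=[0.8, 220.0])
reply = agent.react([0.75, 230.0])   # closest to the second segment
```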
Meanwhile, the Prototypes & Engineering unit, in coordination with the Sound Music Movement Interaction team, stands out for its work on real-time gesture and sound interaction technologies, particularly for the performing arts. One of its flagship projects is the development of R-IoT wireless sensors, which capture performers’ movements with embedded motion sensing and a wireless microcontroller, making it possible to build interactive experiences in which gestures directly influence the sound.
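Because R-IoT modules stream their motion data over the network as OSC messages, a gesture-to-sound mapping can also be prototyped outside Max. The sketch below uses the python-osc package; the OSC address pattern, port, and the mapping itself are illustrative assumptions, not the sensor’s documented defaults.

```python
from pythonosc import dispatcher, osc_server

# Assumed OSC address and port for the incoming sensor stream;
# the actual values depend on how the R-IoT module is configured.
SENSOR_ADDRESS = "/riot/accelerometer"
LISTEN_PORT = 8888

def on_motion(address, *values):
    """Map the first accelerometer axis to a normalized control value,
    which could then drive a synthesis parameter (gain, filter cutoff, ...)."""
    x = values[0] if values else 0.0
    control = max(0.0, min(1.0, (x + 1.0) / 2.0))  # roughly map [-1, 1] -> [0, 1]
    print(f"{address}: control = {control:.3f}")

disp = dispatcher.Dispatcher()
disp.map(SENSOR_ADDRESS, on_motion)

server = osc_server.BlockingOSCUDPServer(("0.0.0.0", LISTEN_PORT), disp)
print(f"Listening for R-IoT OSC data on port {LISTEN_PORT} ...")
server.serve_forever()
```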
The Sound Music Movement Interaction team, dedicated to researching interactive systems, combines technologies for analysis, synthesis, and real-time sound processing with synchronization tools. These contribute to the emergence of new forms of stage performance, blending music, dance, and multimedia.
These technologies not only enhance the artist’s experience but also offer audiences interactive performances in which music, dance, visuals, and virtual reality intertwine into a unique immersive whole. In 2024, based on the results of end-of-training evaluations, all participants who completed the "Composing an Interactive Environment and Preparing a Performance with Generative Agents" training program achieved all or most of their objectives, with a satisfaction rate of 100%.
Musician, composer, lecturer, and doctoral researcher
"The week-long training course on 'Deep Generative Audio Models and AI in Max and Ableton Live' was, without a doubt, one of my most intellectually stimulating and artistically inspiring experiences to date. The quality of the teaching and resources was exemplary. I highly recommend this training to others and will return for future sessions in other areas. Thank you to Philippe and the entire team! "
Creative Technologist
"The workshop was an incredible experience—the perfect blend of creativity, theory, and technical expertise. The diversity of IRCAM's researchers and teachers, each bringing unique backgrounds and perspectives, made it exceptionally enriching. I now feel encouraged to deepen my experiments with machine learning and sound."
Professor of Music Composition
"The musical experience was an excellent complement to the technical instruction."