The fundamental principle of IRCAM is to encourage productive interaction among scientific research, technological development, and contemporary music production. Since its establishment in 1977, this principle has been the foundation of the institute's activities. A central concern is contributing to the renewal of musical expression through science and technology.
IRCAM is an internationally recognized research center dedicated to creating new technologies for music. The institute offers a unique experimental environment where composers strive to enlarge their musical experience through the concepts expressed in new technologies.
In support of IRCAM's research and creation missions, the educational program seeks to shed light on the current and future meaning of the interactions among the arts, sciences, and technology, and to share its models of knowledge, know-how, and innovations with the widest possible audience.
Engaged with societal and economic issues at the intersection of culture and information technology, research at IRCAM has earned an international reputation as an interdisciplinary benchmark in the science and technology of sound and music, constantly attentive to new needs and uses in society.
As part of its research on deep generative models, the ACIDS group in the Musical Representations team at IRCAM designs machine learning tools for musical and creative synthesis. The goal is to provide novel tools that model musical creativity and extend sonic possibilities through machine learning. In this context, the team experiments with deep AI models applied to creative materials, aiming to develop artificial creative intelligence. Over the past years, the ACIDS group has developed several objects that embed this research directly as real-time tools usable in MaxMSP and Ableton Live, along with many prototypes of innovative instruments and lightweight embedded deep audio models. The researchers now notably provide RAVE, AFTER, FlowSynth, and the nn~ library for integration into MaxMSP and Ableton Live.
In addition, the work of the Musical Representations team on generative processes—most recently as part of the ERC REACH (Raising Co-Creativity in Cyber-Human Musicianship) research project—has led to the development of innovative software tools. These tools enable the creation of interactive devices and the exploration of new forms of interaction between composers, musicians, and improvisers. Leveraging artificial listening and the synchronization of musical signals, Dicy2 and Somax2 implement intelligent agents capable of learning either in real-time or beforehand, facilitating immediate or pre-programmed musical interactions in both structured and improvised contexts.
In 2024, all participants enrolled in the “Deep Generative Audio Models and AI in Max and Ableton Live” and “Composing an Interactive Environment and Preparing a Performance with Generative Agents” courses met all or most of the educational objectives, achieving a 100% satisfaction rate.
“The week-long training course on ‘Deep Generative Audio Models and AI in Max and Ableton Live’ was, without a doubt, one of my most intellectually stimulating and artistically inspiring experiences to date. The quality of the teaching and resources was exemplary. I highly recommend this training to others and will return for future sessions in other areas. Thank you to Philippe and the entire team!” Mark, musician, composer, lecturer, and doctoral researcher, Deep Generative Audio Models and AI in Max and Ableton Live

“The workshop was an incredible experience: the perfect blend of creativity, theory, and technical expertise. The diversity of IRCAM's researchers and teachers, each bringing unique backgrounds and perspectives, made it exceptionally enriching. I now feel encouraged to deepen my experiments with machine learning and sound.” Pedro, creative technologist, Deep Generative Audio Models and AI in Max and Ableton Live

“I'm very happy to have taken this course. Jérôme and Mikhail explained everything to us in detail, and we were able to take advantage of IRCAM's facilities to put our new knowledge into practice and exchange ideas. I can only recommend this course to anyone interested in music production in the digital age who also wants to combine it with an acoustic instrument and improvised music practice; there's a lot to discover!” Swantje, student, Interacting, Composing and Improvising with Generative Agents

“The musical experience was an excellent complement to the technical instruction.” Bruce, professor of music composition, Composing an Interactive Environment and Preparing a Performance with Generative Agents