Musical Representations
Reflection on the high-level representation of musical concepts and structures, supported by original computer languages developed by the team, leads to models that can be used for musical analysis and creation.
On the musicology side, tools for representation and modeling enable a truly experimental approach that significantly rejuvenates this discipline.
On the creation side, the objective is to design musical companions that interact with composers, musicians, sound engineers, etc. throughout the musical workflow. The software developed has been distributed to a large community of musicians, materializing original forms of musical thought tied to the particular characteristics of the computer media that represent (and execute) them: the final score, the score's different levels of formal elaboration, its algorithmic generators, and live interaction during a performance.
For the past few years, the team has worked on symbolic interaction and artificial creativity through projects on artificial listening, synchronization of musical signals, and score following (a subject that led to the creation of an Inria project-team); orchestration assistance (using time-series analysis and deep learning techniques); and the engineering of intelligent agents capable of listening, learning, and musical interaction in improvised contexts.
The team has a long history of collaborations with composers and musicians both from IRCAM and elsewhere. Archives of this work can be found in three volumes of the OM Composer’s Book, guaranteeing its international dissemination and continuity.
Major Themes
- Computer-assisted composition: compositional assistance, orchestration assistance
- Control of synthesis and spatialization, creative systems to write for time, sound, space, and interaction
- Mathematics and music
- Computer languages for music: OpenMusic, Antescofo
- Modeling style, dynamics of improvised interaction: improvised musical interactions, DYCi2
- New interfaces for composers and teaching
- Musicology and computational analysis
- Efficient search of temporal series
- Writing synchronous time
- First collage of the visual identity of the OpenMusic software by A. Mohsen © Philippe Barbosa
- Médéric Collignon improvising in concert with the OMax software (RIM B. Lévy)
- Moreno Andreatta, CNRS-IRCAM researcher, in his office © Philippe Barbosa
- OM Composer's Book
- Claude Delangle in the studio with the Antescofo software © Inria / H. Raguet
- Papier Intelligent, the writing of space in composition © Inria
Specialist Areas
Computer-assisted composition and analysis, computer musicology, cognitive musicology, artificial intelligence, computer languages, algebraic and geometric methods, symbolic interactions, languages for synchronous time and tempered time, executable notations
Research topics and related projects
Computer-Assisted Composition (OpenMusic)
Design of models and techniques adapted to the creative process, incorporating computation paradigms as well as musical interactions and representations
Computer-Assisted Composition (Orchids)
Orchestration via an automatic search for instrument combinations and layerings approximating a target sound defined by the composer
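The idea of layering instruments to approach a composer-defined target can be sketched, under simplifying assumptions, as a greedy search that adds the instrument spectrum reducing the distance to a target spectrum the most at each step. This is an illustrative toy model, not Orchids' actual algorithm; the function names and the bin-amplitude representation of spectra are assumptions.

```python
def greedy_orchestrate(target, instruments, max_layers=4):
    """Greedily pick instrument spectra whose sum best approaches the target.

    `target` and each instrument spectrum are equal-length lists of
    amplitudes per frequency bin (a deliberately simplified representation).
    """
    def distance(a, b):
        # Squared Euclidean distance between two spectra.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    mix = [0.0] * len(target)
    chosen = []
    for _ in range(max_layers):
        # Try adding each instrument on top of the current mix.
        name, spectrum = min(
            instruments.items(),
            key=lambda kv: distance(target, [m + s for m, s in zip(mix, kv[1])]),
        )
        candidate = [m + s for m, s in zip(mix, spectrum)]
        if distance(target, candidate) >= distance(target, mix):
            break  # no instrument improves the approximation any further
        mix, chosen = candidate, chosen + [name]
    return chosen, mix
```

A greedy loop like this is only locally optimal; real orchestration search must also handle instrument playability, dynamics, and perceptual (rather than bin-wise) distances.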
Efficient Search of Temporal Series
Code enabling efficient searches over temporal forms
Improvised Musical Interactions - OMax & Co
Development of improvised human-machine musical interactions
Mathematics and Music
Algebraic Models, Topologies, and Categories in Computational Musicology
Orchestration Assistance
Orchestration assistance tool integrated into the software created at IRCAM
Writing for Sound and Space
New approach to the representation of sounds via high-level programs
Writing Timed Interactions and Musical Synchronizations
Definition of complex interactions between performers and live electronics
Somax2
Somax2 is an application for musical improvisation and composition. Implemented in Max, it is based on a generative model that uses a process similar to concatenative synthesis to provide stylistically coherent improvisation while listening and adapting to a musician (or any other audio or MIDI source) in real time. The model operates in the symbolic domain and is trained on a musical corpus, consisting of one or more MIDI files, from which it draws the material used for improvisation. It can be used with little configuration to interact autonomously with a musician, but it also allows manual control of its generative process, effectively letting the model serve as an instrument that can be played in its own right.
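As a toy illustration of this kind of corpus-based symbolic generation (not Somax2's actual algorithm), the sketch below recombines a pitch corpus: every generated note continues a transition observed in the corpus, and among the observed continuations it prefers those matching an incoming "influence" pitch from the listened source. The function names and the pitch-class matching rule are illustrative assumptions.

```python
import random

def build_model(corpus):
    """Map each pitch to the list of pitches that follow it in the corpus."""
    model = {}
    for a, b in zip(corpus, corpus[1:]):
        model.setdefault(a, []).append(b)
    return model

def improvise(model, start, influences, rng=random.Random(0)):
    """Generate one pitch per incoming influence, reusing corpus transitions
    and preferring continuations in the same pitch class as the influence."""
    out = [start]
    for infl in influences:
        candidates = model.get(out[-1]) or [start]
        matching = [p for p in candidates if p % 12 == infl % 12]
        out.append(rng.choice(matching or candidates))
    return out
```

The output thus stays stylistically anchored in the corpus (every step is a transition heard there) while bending toward what the "musician" is playing, a much-simplified version of the listening-and-adapting behavior described above.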
European and national projects
ACIDS
Artificial Creative Intelligence and Data Science
REACH
Raising Co-Creativity in Cyber-Human Musicianship
COSMOS
Computational Shaping and Modeling of Musical Structures
Dyci2
Creative Dynamics for Improvised Interaction
EFFICAC(e)
Extended Frameworks For 'In-Time' Computer-Aided Composition
Heart.FM
Maximizing the Therapeutic Potential of Music through Tailored Therapy with Physiological Feedback in Cardiovascular Disease
Houle
Hierarchical Object based Unsupervised Learning
Inedit
Interactivity in Writing of Interaction and Time
MAKIMOno
Creating the first partnership towards a true scientific theory of orchestration
MERCI
Mixed Musical Reality with Creative Instruments
Orchestration
Comparing the state of the art in musicology, in the psychology of perception, and in computer music to create new tools
Software (design & development)
OpenMusic
Antescofo
OMax
Orchids
Musique Lab 2.0
Team
Head Researcher : Gérard Assayag
Researchers & Engineers : Mikhail Malt, Jérôme Nika, Carlos Agon Amado, Jean-Louis Giavitto, Karim Haddad, Marco Fiorini, Emily Graber, Emma Frid
Doctoral Students : Yohann Rabearivelo, David Genova, Nils Demerlé, Giovanni Bindi, Gonzalo Romero, Paul Lascabettes, Antoine Caillon
Trainees : Pierre Rodriguez, Ninon Devis, Constance Douwes
Administrative : Vasiliki Zachari
: Sasha J. Blondeau, Claudy Malherbe
Associated Researcher : Georges Bloch
Collaborations
Bergen Center for Electronic Arts (Norway), CIRMMT/McGill University (Canada), City University London, CNSMDP, Columbia University New York, CNMAT/UC Berkeley, Electronic Music Foundation, Gmem, Grame Lyon, École normale supérieure Paris, EsMuC Barcelona, Harvard University, Inria, IReMus – Sorbonne Paris-4, Jyväskylä University, University of Bologna, USC Los Angeles, Marc Bloch University Strasbourg, Pontificia Universidad Javeriana Cali, Paris-Sud University Orsay, University of Pisa, UPMC Paris, UCSD San Diego, Yale, U. Minnesota, U. Washington.