The fundamental principle of IRCAM is to encourage productive interaction among scientific research, technological developments, and contemporary music production. Since its establishment in 1977, this initiative has provided the foundation for the institute’s activities. One of the major issues is the importance of contributing to the renewal of musical expression through science and technology.
IRCAM is an internationally recognized research center dedicated to creating new technologies for music. The institute offers a unique experimental environment where composers strive to enlarge their musical experience through the concepts expressed in new technologies.
In support of IRCAM's research and creation missions, the educational program seeks to shed light on the current and future meaning of the interactions among the arts, sciences, and technology as well as sharing its models of knowledge, know-how, and innovations with the widest possible audience.
Engaged with societal and economic issues at the intersection of culture and information technology, research at IRCAM has earned an international reputation as an interdisciplinary benchmark in the science and technology of sound and music, constantly attentive to new needs and uses in society.
Carried out by saxophonist Rémi Fox and researcher/musician Jérôme Nika, the "Artificial Hippocampus" project is the result of nearly 20 years of research by the Musical Representations team, under the leadership of Gérard Assayag. A quick historical overview.
The "Artificial Hippocampus" project is first of all based on the duo "C'est pour ça", formed by Rémi Fox and Jérôme Nika, but more importantly on a software environment in constant evolution: Dicy2. An environment on which Jérôme Nika is building and is based on a research dynamic spanning nearly 20 years which has, over the years, generated numerous avatars, or incarnations, depending on the context.
Rémi Fox and Jérôme Nika, “C’est pour ça” at the Improtech Paris-Athina 2019 festival, Onassis Cultural Center, Athens, Greece, September 27, 2019.
"These are different instruments or musical agents," explains Jérôme Nika. "Each one is a specific variation, within the framework of a larger project of musical research intended to investigate how to interact with a musical 'memory' in different contexts. Dicy2 is actually the working title of the research project that led to the creation of this instrument! If only I had known that the name would stick... "
The first avatar produced within the framework of this project came into being thanks to Gérard Assayag, about twenty years ago. The objective of the Musical Representations team was to model a musical memory by analyzing its internal patterns, and then to let the machine improvise by "wandering around in a non-linear and playful way, in order to create material that was not only new, but also relevant from the point of view of a listener's perceptions. It was a way to privilege a certain form of continuity in the discourse that was produced," summarizes Jérôme Nika.
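To make the idea concrete, here is a minimal toy sketch in Python of that kind of navigation. It is in no way the Musical Representations team's actual algorithm, and every name in it is invented for illustration: the "memory" is a symbolic sequence (pitches, chord labels...), its internal patterns are indexed, and generation walks the memory linearly while occasionally jumping to another occurrence of the current context, so that the output is new yet locally continuous.

```python
# Toy illustration (not IRCAM's actual algorithm) of the idea described
# above: index the internal patterns of a recorded "memory", then navigate
# it non-linearly, jumping only between positions that share a common
# context so the output stays locally continuous.

from collections import defaultdict
import random

def index_memory(memory, k=2):
    """Map every length-k pattern to the positions where it ends."""
    continuations = defaultdict(list)
    for i in range(k, len(memory)):
        context = tuple(memory[i - k:i])
        continuations[context].append(i)
    return continuations

def improvise(memory, length, k=2, jump_prob=0.3, seed=None):
    """Walk the memory linearly, sometimes jumping to another occurrence
    of the current context: new sequences, locally faithful to the memory."""
    rng = random.Random(seed)
    continuations = index_memory(memory, k)
    pos = k
    output = list(memory[:k])
    for _ in range(length):
        context = tuple(output[-k:])
        candidates = continuations.get(context, [])
        if candidates and rng.random() < jump_prob:
            pos = rng.choice(candidates)   # non-linear jump, same context
        output.append(memory[pos % len(memory)])
        pos += 1
    return output

# 'memory' could be any symbolic transcription: pitches, chord labels...
memory = list("ABACABADABACABAE")
print("".join(improvise(memory, 24, seed=1)))
```

Jumping only where contexts coincide is what preserves the "form of continuity" Nika describes: each splice is locally indistinguishable from the original memory.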
This first prototype already provides good results, but the researchers are hungry for more. They would like, first, for the machine to be able to listen live to a musician and react to what they propose and, second, to develop tools for compositional control of the discourse it produces. At this stage of its development, the tool in question is only equipped with low-level controls, such as "Play" or "Don't play", "Feed from this part of your memory", or "from this other part".
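As a rough sketch of what such low-level controls might look like (the class and method names below are hypothetical, not the prototype's actual interface):

```python
# Hypothetical sketch of the low-level controls described above: nothing
# reactive yet, just "Play", "Don't play", and restricting generation to
# one region of the memory. All names are invented for illustration.

class MemoryAgent:
    def __init__(self, memory):
        self.memory = memory
        self.playing = False
        self.region = (0, len(memory))   # default: the whole memory

    def play(self):
        self.playing = True

    def dont_play(self):
        self.playing = False

    def feed_from(self, start, end):
        """'Feed from this part of your memory.'"""
        self.region = (start, end)

    def next_event(self, step):
        if not self.playing:
            return None
        start, end = self.region
        return self.memory[start + step % (end - start)]

agent = MemoryAgent(list("ABACABAD"))
agent.play()
agent.feed_from(0, 4)                    # draw only from the first section
print([agent.next_event(i) for i in range(6)])
```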
Example of a tutorial patch using the library's objects: an agent embedding a model learned from the automatic analysis of an audio file, a module for manual queries and for queries coming from reactive listening to an audio stream analyzed in real time, and an audio rendering module based on the CataRT engine (example taken from saxophonist Rémi Fox's use of the Dicy2 library during a performance at the Darmstadt Academy, July 2018).
While this initial prototype will continue to make advances, and will incorporate new features over time, two others are being launched in parallel. One aims at developing a type of listening that reacts to the environment: a real-time analysis of the musician's playing influences the way the machine navigates through its musical memory. The other aims at being able to plan the machine's improvisation. "The idea," says Jérôme Nika, "is to ask the machine to produce a discourse that will follow, for example, a predetermined audio profile, or a harmonic evolution. In fact, we are approaching a sort of compositional process."
"Dicy2 is the latest of these different research projects: it does not replace the previous ones, but explores the subject in another direction, which is its own, while sharing the same theoretical bases. We now know how to model a musical memory, we know how to react to a musician, we know how to follow a scenario. It is now a question of reconciling the two approaches by moving between the two, in a sort of continuum: stimulating a memory via the analysis of a musician's playing, but also according to a compositional structure, with the help of numerous descriptors."
"Rémi and I want to make music together," insists the researcher, "by freeing ourselves from the demonstrative side of the possibilities offered by these new technologies, and by using them for what they are: instruments. What musical voices can be created by interacting in this way? I don't think that the music produced is really "new", but the playing practices certainly are. I remember for example a recent improvisation session at the Ars Electronica Festival in Austria. Rémi was playing something very melodic and harmonic and I wanted to follow him: I threw myself 100% into what he was doing, I configured my instrument/machine to listen very melodically and harmonically to what Rémi was playing, and in turn to produce an excess of melodic and harmonic material. I don't know why, maybe because he was off on something else, or just out of contradiction, Rémi then started playing in a very choppy and abrupt way, with slaps, licks, noises... and the machine had a completely unexpected reaction, which we then had to play with."