Jazz Ex Machina
On February 11, as part of the Présences festival, Steve Lehman and Frédéric Maurin, along with the Orchestre National de Jazz, will present Ex Machina, a singular creation composed with the help of, and involving in real time, the DYCI2 environment developed by Jérôme Nika and the Musical Representations team at IRCAM. The two musicians reveal their creative process.
From left to right: Steve Lehman and Frédéric Maurin at IRCAM © IRCAM - Centre Pompidou
What were your prior experiences interacting with computer tools and what did you learn from them?
Frédéric Maurin: I learned how to use Max at IRCAM a few years ago and I used it in many compositions with Ping Machine, the orchestra I led before the ONJ. But there was no notion of interaction or "decision-making" on the machine's part: everything was controlled. The conclusion I drew was that I needed specialists in these technologies to be able to go further.
Steve Lehman: I am familiar with a number of software environments, including Max, which I have used frequently, as well as Orchidea, which I used to compose the music for Ex Machina. I have been dabbling in forms of computer-assisted improvisation since at least 2005: first under the tutelage of George Lewis as a doctoral student in composition at Columbia University in New York, and later in collaboration with the likes of David Wessel and Gérard Assayag at IRCAM. And I have been working on DYCI2, in close collaboration with researcher Jérôme Nika, since 2016. So I am more or less aware of the state of the art and its potential. The process of working with these new technological resources is often quite slow and incremental. But overall, I am very happy with the progress Jérôme and I have made in developing increasingly refined and well-defined concepts regarding the underlying aesthetics and philosophies that guide our work on human/machine interaction and musical improvisation. Working with DYCI2 has opened up a new range of listening modes for me. And, as a result, a new mode of composition, and of real-time composition, also known as "improvisation".
Photo: The DYCI2 environment, work session at IRCAM, screen capture from the film Images of Work #29 © IRCAM - Centre Pompidou
Are you interested in how the machine works?
F.M.: For me, it is crucial to try to understand how the machine works. Without that, you can't give it the right material to learn and you risk getting lost in the composition. It would be like writing for an instrument without knowing how it works: you risk doing something stupid!
At the same time, understanding how it works helps to frame what one can hope to obtain from it, and thus to develop a coherent writing process.
It is a very empirical learning process, with many failures, for me at least.
What does the machine let you do that you couldn't do otherwise?
S.L.: It makes it possible to calculate and/or sequence things at an incredible speed. In musical terms, this often translates into an almost instantaneous analysis of extremely complex sounds, and/or the proposal of hundreds of potential solutions to a particular musical problem with a single click. The computer is also a tool that brings a singular objectivity and bias, quite different from my own. That alone is often very useful when evaluating a musical material, comparing it to another, and looking for links between them.
F.M.: Its real-time analysis and calculation capabilities allow us to track parameters that a human could not. Similarly, the computer can use very complex and mixed sound memories to generate its responses, which leads to reactions that a human could not imagine.
Were you surprised by the machine's suggestions and reactions?
F.M.: Yes, and thankfully so. I think that's a very big part of what we're looking for. If you don't want to be surprised, you might as well write all the music!
However, the difficulty of the process we set up is to obtain consistent behavior from one time to the next without depriving ourselves of the element of surprise, which is why the fine-tuning that Jérôme does on the machine is so important. Sometimes the surprise is that nothing works, because either the settings are not right or we haven't given the machine the right material. In most cases, things go roughly in the direction we had imagined, but not exactly, and we end up somewhere else.
Obviously, in the case of interactions with a soloist, the soloist also bears part of the responsibility in this process. From this point of view, we are looking for the same things we look for in any form of improvisation: spontaneity, and the unique character of the musical moment.
S.L.: The notion of surprise is central to any endeavor involving human/computer interaction. The short answer is yes, I have been surprised many times by the range of different discourses the computer offers: the connections it makes between two seemingly disparate sound objects, for example. It is an incredibly delicate balance that one must be careful to maintain, which means making decisions based on one's own aesthetic sense and musical tastes. At the same time, you have to let the computer run free to surprise you, to give it the space to help you discover something new about your own musical personality and potential. The potential for surprise and discovery in DYCI2 is phenomenal! Sometimes the surprises are very pleasant. Sometimes less so. But after several years of working with Jérôme Nika on DYCI2, I have the feeling that we are becoming more and more effective at finding good surprises.
Photo: Steve Lehman in the studio at IRCAM, screen capture from the film Images of Work #29 © IRCAM - Centre Pompidou
During the performance, did you have the feeling of really interacting with a machine? How does it differ from improvising with a "human" musician?
S.L.: That's a good question. And, at the same time, if you want to improvise with another human musician, you can easily do so. In many ways, this brings us back to the more or less obsolete human/machine duality. Of course, much can be learned about the nature of musical improvisation by studying and modeling machine behavior based on highly nuanced musical aesthetics and on human cognition and perception. But at this point, I can safely say that computer-guided improvisation can teach us much about the nature of musical improvisation that human-to-human improvisation cannot. Starting with the fact that the computer has no feelings! It can play whatever I ask it to play, without my having to worry about whether it likes it, or whether I offend it because I play faster than it can, or because I have knowledge it does not have. So that's already a good starting point.
F.M.: I agree with Steve: the interest of working with the machine is precisely that it will have inhuman behaviors and superhuman capacities.
Photo: The Orchestre National de Jazz, Ex Machina rehearsals at the Maison de la Radio, screen capture from the film Images of Work #29 © IRCAM - Centre Pompidou
Are there any aspects of the machine that you would like to see developed?
S.L.: Of course. In fact, I think adding more and more ultra-sophisticated real-time features to DYCI2 is a goal shared by many people. And the environment has already advanced a fair amount in this regard since my first tests in 2016. The areas of rhythm and rhythmic intelligence hold some of the richest and broadest potential for improvement. The nature of rhythm perception and sophisticated rhythmic entrainment is incredibly complex to model computationally.
F.M.: Today, without a "click" to give it the tempo, the computer cannot follow a human drummer, which raises important technical challenges for real-time analysis. Moreover, rhythm is also a matter of time, so the machine has difficulty anticipating what will happen next. Yet the drums and their rhythmic language are at the heart of the evolution of the music we practice, and, in improvisation, the interaction between the soloist and the drums is a central element, involving many rhythmic subtleties, great rigor in the tempo, but also, at times, a lot of freedom.
Moreover, computing limits are always an issue: with a program like ours, with about an hour and fifteen minutes of music, we start to push the limits of the machine. It's like writing for an orchestra: tell a composer he has two horns, he'll want three; and once there are three, he'll want four!
We still have a long way to go - so much the better!
Interview by Jérémie Szpirglas
This initiative draws on research and software from the REACH project of IRCAM's Musical Representations team, directed by Gérard Assayag.