In Convergence, Alexander Schubert deconstructs our human perceptions by revealing those produced by the machine. On stage, the sounds are in a constant state of transformation, and the musicians interact with their video-generated counterparts, reshaped by a program developed at IRCAM that combines deep learning and autoencoders. The fluidity of identity that results from modeling personal characteristics is the starting point for a broader questioning of the limits of our mental constructions.
At the IRCAM-STMS lab, we met the two researchers who worked with the German composer on this piece, alongside computer music designer Benjamin Lévy. Philippe Esling and Antoine Caillon answer our questions about this challenging art/science collaboration.
From left to right, Alexander Schubert, Philippe Esling and Antoine Caillon
Tell us about your encounter with Alexander Schubert... What kind of expertise did he bring to the Musical Representations team?
Meeting Alexander was the start of a process of mutual discovery: a meeting of minds and a way of listening free of preconceptions. On our side, we were fascinated by the contemporary, refreshing aspect of Alexander's work (especially Codec Error). In turn, Alexander was eager to understand what artificial intelligence could bring to his art. Even though he came with a clear idea of what Convergence would become, he was not afraid to question our approaches as well as his own. We spent hours discussing conceptual and technical issues until we reached a practical approach that allowed him to fully explore the models we have developed over the last few years.
What is the role of artificial intelligence (AI) in this creation?
AI is omnipresent in this work, so much so that it seems to take on the roles of composer, conductor, musician, and listener simultaneously. From a purely technical point of view, AI is present on many fronts, in both the sound and the visuals. On the one hand, it allowed us to generate sounds from Alexander's specific requests (screams, noisy violins), which are scattered throughout the work. Here, AI transcends its role as a "generating machine" because it is used for timbre transfer: it interprets the sound it perceives and presents it back to us in a new space of its own understanding. This makes it possible to transform a voice into a violin, or a cello into a human scream.
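The timbre transfer described here can be pictured as encoding a sound into a latent space with one model and resynthesizing it with a decoder associated with a different instrument. The toy linear autoencoder below is a minimal sketch of that idea only; the actual IRCAM models are deep neural networks trained on audio, and every name here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearAutoencoder:
    """Toy linear autoencoder: x -> z (encode) -> x_hat (decode)."""
    def __init__(self, dim_in, dim_latent):
        # Random projections stand in for trained weights.
        self.enc = rng.standard_normal((dim_latent, dim_in)) / np.sqrt(dim_in)
        # Pseudo-inverse serves as a crude decoder for this sketch.
        self.dec = np.linalg.pinv(self.enc)

    def encode(self, x):
        return self.enc @ x

    def decode(self, z):
        return self.dec @ z

# Two hypothetical models sharing the same latent dimensionality:
voice_model  = LinearAutoencoder(dim_in=64, dim_latent=8)
violin_model = LinearAutoencoder(dim_in=64, dim_latent=8)

# "Timbre transfer": analyze with the voice encoder,
# resynthesize with the violin decoder.
voice_frame = rng.standard_normal(64)        # stand-in for an audio feature frame
latent = voice_model.encode(voice_frame)     # latent representation, shape (8,)
violin_frame = violin_model.decode(latent)   # rendered "in the other timbre", shape (64,)
```

In a real system the encoder and decoder would be learned from recordings of each instrument, so that decoding a voice's latent trajectory through a violin decoder yields violin-like sound with the voice's gestures.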
Did you develop a specific program to meet the composer’s expectations?
Alexander's work led us to rethink how we use the tools we develop. After long exchanges in which we confronted his artistic vision with our technical reality, we designed a dedicated tool that unifies our team's AI models within a single graphical interface. This let us integrate these projects into a larger application framework and explore the creative limits of each model, establishing a seamless flow between science and creation: Alexander could quickly explore new approaches, and we could define new models based on his feedback.
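As a rough illustration of what unifying several models behind a single interface can look like, here is a minimal registry pattern in Python. All names and the placeholder "models" are invented for this sketch and are not taken from the actual Playground tool:

```python
class ModelRegistry:
    """Expose heterogeneous models through one uniform entry point."""
    def __init__(self):
        self._models = {}

    def register(self, name, fn):
        # Any callable taking audio-like input can be plugged in.
        self._models[name] = fn

    def run(self, name, audio):
        return self._models[name](audio)

registry = ModelRegistry()
# Placeholder "models" standing in for trained networks:
registry.register("timbre_transfer", lambda audio: audio[::-1])
registry.register("scream_generator", lambda audio: [s * 2 for s in audio])

print(registry.run("timbre_transfer", [1, 2, 3]))  # → [3, 2, 1]
```

A graphical front end (such as a Max/MSP patch) can then address every model through the same `run(name, input)` call, which is what makes it easy to swap models in and out during creative exploration.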
Screenshot: Playground, a graphical interface based on Max/MSP that allows intuitive use of the AI models
Your collaboration with Alexander Schubert will continue next season for a new work commissioned by IRCAM for the ManiFeste-2022 festival. As researchers, what do artistic collaborations bring you?
We learn a lot from these collaborations, firstly because the creative exploration of AI models usually leads to pushing them to their limits. So, by looking at the path taken by composers, we can rethink the methods of interaction and the very definition of the models we design. In addition, composers often come up with ideas that at first seem impractical or crazy, but when studied in more detail, open the door to fundamentally new, and potentially revolutionary, concepts.
More info about the 2021 "Digital Musics & Sound Art" award by Ars Electronica.