The project on creative dynamics for improvised interaction focuses on the creation, adaptation, and implementation of effective models of artificial listening, learning, interaction, and automatic generation of musical content. These models enable the creation of creative digital musical avatars that can either be incorporated, in an interactive and artistically convincing manner, into a range of human contexts such as live performance, (post-)production, or education, or contribute perceptive and communicative skills to cyber-physical systems.
Project Description & Goals
The project highlights improvised interaction as an anthropological and cognitive model of action and decision, as a schema of discovery and unsupervised learning, and as a discursive tool for interaction between humans and digital artifacts, with the aim of modeling style and interaction.
The objective is to produce creative agents that become autonomous through direct learning from live performances by human improvising musicians, creating a loop of stylistic retroaction: humans are simultaneously exposed to the productions of digital artifacts that themselves improvise. This creates a situation of human-artifact communication that evolves with complex dynamics. Off-line learning from archives can also be envisaged, either to systematically "color" the digital individuality of the agents or to situate the experience within different genres (jazz, classical, pop, etc.). The live performance situation could be extended to novel applications such as interaction with users of varying skill levels, audio-visual archives dynamically revived in artistic or educational scenarios of co-improvisation, and, more generally, new narrative forms in interactive/generative digital media and virtual reality.
The goal is also to constitute procedural knowledge of music through this interaction and to produce a rich, instantaneous human-digital experience: one likely to provide aesthetic satisfaction to users, to enrich their sound and musical production, to engage in dialog with them, to imitate or contradict them, and, in general, to stimulate and revitalize the experience of collective performance. This human-artifact interaction will be extended to artifact-artifact interaction in diverse configurations (several humans, a network of several digital artifacts). Through this process, an autonomous musical individuality will emerge, capable of intervening in a plausible fashion in situations of collective interaction.
A creative entity in an audio-musical context thus subsumes a collection of concurrent, contributing, and competing agents capable of interactive learning, taking charge of machine learning tasks, discovering short- and long-term temporal structures, modeling style, generating symbolic sequences, and rendering audio in real time, but also of handling visualization and human-machine interfaces.
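To make the idea of style modeling through symbolic sequence generation concrete, here is a deliberately minimal sketch: a first-order transition table learned from a toy symbolic "performance" (a hypothetical motif over pitch-class names), then used to generate a stylistically similar continuation. This is only an illustrative assumption on our part; the agents described above rely on far richer models of short- and long-term temporal structure.

```python
import random
from collections import defaultdict

def learn_transitions(sequence):
    """Build a first-order transition table from a symbolic sequence.

    Each symbol maps to the list of symbols that followed it in the
    training material, so frequent continuations are weighted naturally.
    """
    table = defaultdict(list)
    for current, nxt in zip(sequence, sequence[1:]):
        table[current].append(nxt)
    return table

def generate(table, start, length, seed=None):
    """Generate a continuation that reuses the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:  # dead end: restart from any learned symbol
            choices = list(table)
        out.append(rng.choice(choices))
    return out

# Toy "performance": a short motif (hypothetical training input)
motif = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]
table = learn_transitions(motif)
print(generate(table, "C", 8, seed=1))
```

Every consecutive pair in the output also occurs somewhere in the training motif, which is the weakest possible sense of "playing in the same style"; the interactive agents of the project close the loop by relearning continuously from what the human plays in response.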
Project reference: ANR-14-CE24-0002-01.