Sound Music Movement Interaction
Our work relates to all aspects of the interactive process, including the capture and analysis of gestures and sounds, tools for authoring interaction and synchronization, and techniques for real-time sound synthesis and processing. These research projects and their associated software (MuBu for Max, CataRT, Soundworks) are generally carried out within interdisciplinary projects involving scientists, artists, teachers, and designers, and find applications in creative projects, music education, movement learning, and medical domains such as physical rehabilitation guided by sound and music.
Major Themes
- Modeling and Analysis of Sounds and Gestures: this theme covers theoretical developments concerning the analysis of sound and gesture data or, more generally, multimodal temporal morphologies. This research draws on diverse techniques for audio analysis and the study of the gestures of musicians and dancers.
- Interactive Sound Synthesis and Processing: this theme focuses essentially on synthesis and sound-processing methods based on recorded sounds or large sound collections (corpus-based concatenative synthesis).
- Interactive sound systems based on gesture and new instruments: this theme focuses on the design and development of interactive sound environments using gestures, movements, and touch. Interactive machine learning is one of the tools developed in this framework.
- Collective musical interaction and distributed systems: this theme addresses questions of musical interaction involving anywhere from a few users to hundreds. It concerns the development of a Web environment combining computers, smartphones, and/or embedded systems, making it possible to explore new possibilities for expressive and synchronized interactions.
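Synchronizing expressive interactions across many devices typically starts from estimating each client's clock offset with respect to a shared reference, NTP-style. The sketch below is a generic, much-simplified illustration of that idea only; it is not the Soundworks API, and the function names and the simulated server are hypothetical.

```python
import time

def estimate_offset(server_time_fn, n_pings=10):
    """Estimate the client-server clock offset, NTP-style:
    offset ~= server time minus the midpoint of the local
    send/receive timestamps of each round trip."""
    offsets = []
    for _ in range(n_pings):
        t_send = time.monotonic()
        t_server = server_time_fn()          # round trip to the server
        t_recv = time.monotonic()
        offsets.append(t_server - (t_send + t_recv) / 2)
    offsets.sort()
    return offsets[len(offsets) // 2]        # median is robust to network jitter

# Hypothetical server whose clock runs 0.5 s ahead of this client
def fake_server_time():
    return time.monotonic() + 0.5

offset = estimate_offset(fake_server_time)   # close to 0.5 in this simulation
```

Taking the median over several pings is a common way to discard round trips inflated by transient network delay.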
Specialist Areas
Interactive sound systems, human-machine interaction, motion capture, sound and gesture modeling, real-time sound analysis and synthesis, statistical modeling and interactive machine learning, audio signal processing, distributed interactive systems.
Collaborations
Atelier des Feuillantines, BEK (Norway), CNMAT Berkeley (United States), Cycling’74 (United States), ENSAD, ENSCI, GRAME, HKU (Netherlands), Hôpital Pitié-Salpêtrière, ICK Amsterdam (Netherlands), IEM (Austria), ISIR-CNRS Sorbonne Université, Little Heart Movement, Mogees (United Kingdom/Italy), No Design, Motion Bank (Germany), LPP-CNRS Université Paris-Descartes, Universitat Pompeu Fabra (Spain), UserStudio, CRI-Paris Université Paris-Descartes, Goldsmiths University of London (United Kingdom), University of Geneva (Switzerland), LIMSI-CNRS Université Paris-Sud, LRI-CNRS Université Paris-Sud, Orbe.mobi, Plux (Portugal), ReacTable Systems (Spain), UCL (United Kingdom), Univers Sons/Ultimate Sound Bank, Universidad Carlos III Madrid (Spain), University of Genoa (Italy), McGill University (Canada), ZHdK (Switzerland).
Research topics and related projects
Corpus-Based Concatenative Synthesis
Database of recorded sounds and a unit selection algorithm
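As a minimal illustration of the unit-selection idea behind corpus-based concatenative synthesis, a target sound can be matched against the corpus by a nearest-neighbour search over audio descriptors. The sketch below is a toy example, not the CataRT implementation; the corpus entries, feature names, and weights are all hypothetical.

```python
import math

# Hypothetical corpus: each unit is a short sound segment described by
# audio features (pitch in Hz, loudness in dB); all values are made up.
corpus = [
    {"file": "a.wav", "pitch": 220.0, "loudness": -12.0},
    {"file": "b.wav", "pitch": 440.0, "loudness": -6.0},
    {"file": "c.wav", "pitch": 330.0, "loudness": -9.0},
]

def select_unit(target, corpus, weights=(1.0, 1.0)):
    """Return the corpus unit closest to the target descriptors
    under a weighted Euclidean distance in feature space."""
    def cost(unit):
        dp = (unit["pitch"] - target["pitch"]) * weights[0]
        dl = (unit["loudness"] - target["loudness"]) * weights[1]
        return math.hypot(dp, dl)
    return min(corpus, key=cost)

best = select_unit({"pitch": 400.0, "loudness": -7.0}, corpus)
```

In a real system the selected units would then be concatenated and played back; the weights let the user favour some descriptors over others during selection.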
The Augmented Instruments
Acoustic instruments that have been fitted with sensors
Sound-Music-Health Axis
This cross-cutting axis brings together research related to well-being and health carried out within the STMS laboratory. It pursues several objectives.
European and national projects
DAFNE+
Decentralized platform for fair creative content distribution empowering creators and communities through new digital distribution models based on digital tokens
DOTS
Distributed Music Objects for Collective Interaction
Element
Stimulate Movement Learning in Human-Machine Interactions
MICA
Musical Improvisation and Collective Action
Aqua-Rius
Audio quality analysis for representing, indexing, and unifying signals
Software (design & development)
MuBu
Gesture and temporal shape following
CataRT Standalone
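The gesture and temporal-shape following mentioned above can be illustrated, in a much-simplified form, by matching an incoming gesture against recorded templates with dynamic time warping. This is only a sketch of the general matching idea, not the actual implementation, which relies on real-time probabilistic models; the templates and gesture data below are made up.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    a classic basis for comparing gestures of different durations."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j]: minimal cumulative cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Hypothetical 1-D gesture templates (e.g. one sensor axis over time)
templates = {"circle": [0, 1, 2, 1, 0], "swipe": [0, 2, 4, 6, 8]}
gesture = [0, 1, 2, 2, 1, 0]

# Classify the incoming gesture by its nearest template
label = min(templates, key=lambda k: dtw_distance(gesture, templates[k]))
```

Because DTW tolerates local stretching and compression in time, the same template matches performances of a gesture executed at different speeds.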
Team
Head Researcher: Frédéric Bevilacqua
Researchers & Engineers: Jérôme Nika, Benjamin Matuszewski, Diemo Schwarz, Riccardo Borghesi, Timo Tukhanen
Engineer: Coralie Vincent
PhD Students: Aliénor Golvet, Sarah Nabi