Sound Music Movement Interaction

The Sound Music Movement Interaction team (previously known as the Real-Time Musical Interactions team) carries out research and development on interactive systems dedicated to music and performances.

Our work relates to all aspects of the interactive process, including the capture and multimodal analysis of the gestures and sounds produced by musicians, tools for synchronization and the management of interaction, and techniques for real-time sound synthesis and processing. These research projects and their associated software developments are generally carried out within interdisciplinary projects involving scientists, artists, teachers, and designers, and find applications in artistic creation, music education, movement learning, and the digital audio industry.

Major Themes

Modeling and Analysis of Sounds and Gestures
This theme covers theoretical developments in the analysis of sound and gesture streams, or more generally of multimodal temporal morphologies. The research draws on diverse audio-analysis techniques and on the study of the gestures of performing musicians and dancers.

Technologies for Multimodal Interaction
This theme concerns our tools for the analysis and multimodal recognition of movement and sound, as well as tools for synchronization (gesture following, for example) and visualization.

Interactive Sound Synthesis and Processing
This theme focuses on synthesis and sound-processing methods based on recorded sounds or large sound corpora.

Systems for Gesture Capture and Augmented Instruments
This theme focuses on the team's developments in gestural interfaces and augmented instruments for music and performance.

Specialist Areas

Interactivity, real-time computer science, human-computer interaction, signal processing, motion capture, sound and gesture modeling, statistical modeling and machine learning, real-time sound analysis and synthesis.

Team Website

  • R-IoT: gesture-capture board with 9 degrees of freedom and wireless transmission © Philippe Barbosa
  • Connected tennis rackets © Philippe Barbosa
  • MO - Modular Musical Objects © NoDesign.net
  • CoSiMa project © Philippe Barbosa
  • Siggraph installation, 2014 © DR


Research topics and related projects

Corpus-Based Concatenative Synthesis

Database of recorded sounds and a unit selection algorithm

Gesture Analysis and Recognition

Study of instrumental gesture and its relationship with both musical writing and the characteristics of the sound signal

Augmented Instruments

Acoustic instruments that have been fitted with sensors

European and national projects

CoSiMa

Collaborative Situated Media

EFFICAC(e)

Extended Frameworks For 'In-Time' Computer-Aided Composition

Element

Stimulate Movement Learning in Human-Machine Interactions

Gemme

Musical Gesture: Models and Experiments

Legos

Sensori-motor learning in gesture-based interactive sound systems

MICA

Musical Improvisation and Collective Action

MIM

Enhancing Motion Interaction through Music Performance

Music Bricks

Musical Building Blocks for Digital Makers and Content Creators

RapidMix

Real-time Adaptive Prototyping for Industrial Design of Multimodal Interactive eXpressive technology

Skat-VG

Sketching Audio Technologies using Vocalizations and Gestures

Wave

Web Audio: Editing/Visualization


Software (design & development)


MuBu for Max

MuBu (for "multi-buffer") is a set of modules for real-time multimodal signal processing (audio and movement), machine learning, and descriptor-based sound synthesis. Using the multimodal MuBu container, users can store, edit, and visualize different types of temporally synchronized channels.
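To illustrate the multi-buffer idea, the sketch below (in Python, with purely hypothetical names; it is not the MuBu API) keeps several tracks with different sample rates aligned on a common time axis, so that an audio channel and a motion channel can be cut out over the same time window.

    # Hypothetical sketch of a multi-buffer container: several temporally
    # synchronized tracks (audio, movement, descriptors) share one time axis.
    # Names are illustrative only; this is not the MuBu API.
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Track:
        name: str
        sample_rate: float      # frames per second of this track
        data: np.ndarray        # shape: (num_frames, num_channels)

    @dataclass
    class MultiBuffer:
        tracks: dict = field(default_factory=dict)

        def add_track(self, name, sample_rate, data):
            self.tracks[name] = Track(name, sample_rate, np.asarray(data))

        def slice_seconds(self, t0, t1):
            """Return every track cut to the same time window [t0, t1)."""
            return {
                tr.name: tr.data[int(t0 * tr.sample_rate):int(t1 * tr.sample_rate)]
                for tr in self.tracks.values()
            }

    # Audio at 44.1 kHz and accelerometer data at 100 Hz stay aligned in time.
    mb = MultiBuffer()
    mb.add_track("audio", 44100.0, np.zeros((44100 * 5, 1)))
    mb.add_track("accel", 100.0, np.zeros((100 * 5, 3)))
    window = mb.slice_seconds(1.0, 2.0)  # one second from both channels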

Gesture & Sound

Two Max objects for following temporal morphologies based on Markov models, packaged as software modules for gesture-sound interaction. The VoiceFollower synchronizes sound and visual processes with a pre-recorded voice; the MotionFollower synchronizes sound and visual processes with a pre-recorded movement.
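The following sketch (Python, illustrative only; it is not the VoiceFollower or MotionFollower implementation) shows the basic following idea: a pre-recorded template is treated as a left-to-right chain of states, incoming frames update the probability of being in each state, and the most likely state gives the current position in the recording, which can then drive synchronized sound or visual processes.

    # Minimal sketch of template following with a left-to-right Markov chain.
    # Illustrative only; not the actual VoiceFollower/MotionFollower code.
    import numpy as np

    class TemplateFollower:
        def __init__(self, template, sigma=0.1, p_stay=0.5):
            # Each frame of the pre-recorded template becomes one state.
            self.template = np.asarray(template, dtype=float)
            self.n = len(self.template)
            self.sigma = sigma
            self.p_stay = p_stay           # probability of staying in a state
            self.alpha = np.zeros(self.n)  # forward (state occupancy) probabilities
            self.alpha[0] = 1.0

        def step(self, frame):
            """Consume one live frame; return the estimated template position."""
            # Left-to-right transitions: stay in a state or advance by one.
            advanced = np.zeros(self.n)
            advanced[1:] = self.alpha[:-1] * (1.0 - self.p_stay)
            self.alpha = self.alpha * self.p_stay + advanced
            # Gaussian likelihood of the live frame under each template frame.
            self.alpha *= np.exp(-0.5 * ((frame - self.template) / self.sigma) ** 2)
            self.alpha /= self.alpha.sum() + 1e-12
            return int(np.argmax(self.alpha))

    # Follow a live ramp against a pre-recorded ramp template.
    follower = TemplateFollower(template=np.linspace(0.0, 1.0, 100))
    for x in np.linspace(0.0, 1.0, 100):
        position = follower.step(x)  # advances through the template as x evolves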

CataRT Standalone

Corpus-based concatenative synthesis makes use of a database of recorded sounds and a unit-selection algorithm that chooses segments of the database in order to synthesize a musical sequence by concatenation. The selection is based on characteristics of the recordings obtained by analysis of the recorded sound.
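A much-simplified sketch of the selection step (Python, with hypothetical names; it does not reflect CataRT's actual implementation): each corpus unit carries a descriptor vector, the unit nearest to each target descriptor frame is chosen, and the corresponding audio segments are concatenated.

    # Simplified sketch of corpus-based concatenative synthesis:
    # choose, for each target descriptor frame, the nearest corpus unit,
    # then concatenate the selected audio segments. Illustrative only.
    import numpy as np

    def select_units(target_descriptors, unit_descriptors):
        """Return, for each target frame, the index of the nearest corpus unit."""
        distances = np.linalg.norm(
            target_descriptors[:, None, :] - unit_descriptors[None, :, :], axis=-1)
        return distances.argmin(axis=1)

    def concatenate(units_audio, indices):
        """Concatenate the selected audio segments into one signal."""
        return np.concatenate([units_audio[i] for i in indices])

    # Toy corpus: three units, each with a 2-D descriptor (e.g. pitch, loudness)
    # and a short audio segment.
    unit_descriptors = np.array([[0.1, 0.2], [0.5, 0.9], [0.9, 0.1]])
    units_audio = [np.zeros(512), np.ones(512), -np.ones(512)]
    target = np.array([[0.9, 0.1], [0.5, 0.8]])  # desired descriptor trajectory
    output = concatenate(units_audio, select_units(target, unit_descriptors))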


Collaborations

Atelier des feuillantines, BEK (Norway), CNMAT Berkeley (United States), Cycling'74 (United States), ENSAD, ENSCI, GRAME, HKU (Netherlands), Hôpital Pitié-Salpêtrière, ICK Amsterdam (Netherlands), IEM (Austria), ISIR-CNRS Sorbonne Université, Little Heart Movement, Mogees (United Kingdom/Italy), No Design, Motion Bank (Germany), LPP-CNRS université Paris-Descartes, université Pompeu Fabra (Spain), UserStudio, CRI-Paris université Paris-Descartes, Goldsmiths University of London (United Kingdom), université de Genève (Switzerland), LIMSI-CNRS université Paris-Sud, LRI-CNRS université Paris-Sud, Orbe.mobi, Plux (Portugal), ReacTable Systems (Spain), UCL (United Kingdom), Univers Sons/Ultimate Sound Bank, Universidad Carlos III Madrid (Spain), université de Gênes (Italy), université McGill (Canada), ZhDK (Switzerland).


Publications