Sound Music Movement Interaction
Our work covers all aspects of the interactive process: the capture and multimodal analysis of the gestures and sounds produced by musicians, tools for synchronizing and managing interaction, and techniques for real-time sound synthesis and processing. These research projects and the software developed alongside them are generally carried out within interdisciplinary frameworks involving scientists, artists, teachers, and designers, and find applications in creative projects, music education, movement learning, and the digital audio industry.
Modeling and Analysis of Sounds and Gestures
This theme covers theoretical developments in the analysis of sound and gesture streams, or more generally of multimodal temporal morphologies. The research encompasses a range of audio-analysis techniques as well as the study of the gestures of performing musicians and dancers.
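As an illustration of what a "temporal morphology" of a sound can be, the sketch below computes a frame-wise RMS energy envelope. This is a generic, minimal example, not the team's actual analysis tools; the frame and hop sizes are arbitrary choices.

```python
import numpy as np

def rms_envelope(signal, frame_size=512, hop=256):
    """Frame-wise RMS energy: a simple temporal morphology of a sound."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

# A 1-second tone at 1 kHz with a linear decay, sampled at 16 kHz.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t) * np.linspace(1, 0, sr)

env = rms_envelope(tone)
# The envelope decays over time, mirroring the amplitude morphology of the tone.
assert env[0] > env[-1]
```

The same frame-based scheme applies to gesture streams (e.g. accelerometer magnitude instead of audio samples), which is what makes such descriptors usable in a multimodal setting.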
Technologies for Multimodal Interaction
This theme concerns our tools for the analysis and multimodal recognition of movements and sounds, as well as tools for synchronization (gesture following, for example) and visualization.
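Gesture recognition of the kind mentioned above is often based on aligning a live feature sequence against recorded templates. The sketch below uses plain dynamic time warping to pick the closest template; it is a simplified stand-in, not the team's gesture follower (which operates incrementally in real time), and the template names and values are invented.

```python
import numpy as np

def dtw_distance(template, live):
    """Dynamic-time-warping cost between two 1-D feature sequences."""
    n, m = len(template), len(live)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(template[i - 1] - live[j - 1])
            # Extend the cheapest of the three possible alignment paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

templates = {"circle": np.array([0., 1., 2., 1., 0.]),
             "swipe":  np.array([0., 2., 4., 6., 8.])}
live = np.array([0., 1.1, 2.0, 0.9, 0.1])      # noisy circle-like input
best = min(templates, key=lambda k: dtw_distance(templates[k], live))
assert best == "circle"
```

An online variant of this alignment also yields the current position within the template, which is what allows a follower to synchronize sound playback with an unfolding gesture.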
Interactive Sound Synthesis and Processing
This theme focuses on synthesis and sound-processing methods based on recorded sounds or large sound corpora.
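One common family of methods based on recorded sounds is granular processing: windowed grains read from a buffer are overlap-added at a different rate, here to stretch a sound in time without changing its pitch. This is a generic textbook sketch under simple assumptions (fixed grain size, Hann window), not a description of the team's software.

```python
import numpy as np

def granular_stretch(buffer, grain_size=256, hop=128, stretch=2.0):
    """Time-stretch a recorded sound by overlap-adding windowed grains."""
    window = np.hanning(grain_size)
    out_len = int(len(buffer) * stretch)
    out = np.zeros(out_len + grain_size)
    out_pos = 0
    while out_pos + grain_size < len(out):
        # Read position advances more slowly than the write position,
        # so the same source material is reused across several grains.
        src = min(int(out_pos / stretch), len(buffer) - grain_size)
        out[out_pos:out_pos + grain_size] += buffer[src:src + grain_size] * window
        out_pos += hop
    return out[:out_len]

sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
stretched = granular_stretch(tone)
assert len(stretched) == 2 * len(tone)   # twice as long, same pitch content
```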
Systems for Gesture Capture and Augmented Instruments
This theme covers the gestural interfaces and augmented instruments the team has developed for music and performance.
Keywords: interactivity, real-time computing, human-computer interaction, signal processing, motion capture, sound and gesture modeling, statistical modeling and machine learning, real-time sound analysis and synthesis.
Research topics and related projects
Database of recorded sounds and a unit selection algorithm
Study of instrumental gesture and its relationship with both musical writing and the characteristics of the sound signal
Acoustic instruments that have been fitted with sensors
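The unit-selection topic above can be sketched as a nearest-neighbour lookup in a descriptor space: each recorded unit is described by a feature vector, and the unit closest to a target description is chosen for playback. The corpus, unit names, and descriptors below are purely illustrative.

```python
import numpy as np

# Hypothetical corpus: each unit is described by (pitch in Hz, loudness in dB).
corpus = {
    "unit_a": np.array([220.0, -12.0]),
    "unit_b": np.array([440.0, -6.0]),
    "unit_c": np.array([880.0, -3.0]),
}

def select_unit(target, corpus):
    """Pick the corpus unit whose descriptors are closest to the target."""
    return min(corpus, key=lambda k: np.linalg.norm(corpus[k] - target))

# A target near 440 Hz / -6 dB selects the matching unit.
assert select_unit(np.array([430.0, -5.0]), corpus) == "unit_b"
```

Real systems typically normalize each descriptor dimension and add a concatenation cost between successive units, but the core selection step is this distance minimization.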
European and national projects
Collaborative Situated Media
Extended Frameworks For 'In-Time' Computer-Aided Composition
Stimulate Movement Learning in Human-Machine Interactions
Sensori-motor learning in gesture-based interactive sound systems
Musical Improvisation and Collective Action
Enhancing Motion Interaction through Music Performance
Musical Building Blocks for Digital Makers and Content Creators
Real-time Adaptive Prototyping for Industrial Design of Multimodal Interactive eXpressive technology
Sketching Audio Technologies using Vocalizations and Gestures
Software (design & development)
MuBu for Max
Gesture & Sound
Atelier des Feuillantines, BEK (Norway), CNMAT Berkeley (United States), Cycling’74 (United States), ENSAD, ENSCI, GRAME, HKU (Netherlands), Hôpital Pitié-Salpêtrière, ICK Amsterdam (Netherlands), IEM (Austria), ISIR-CNRS Sorbonne Université, Little Heart Movement, Mogees (United Kingdom/Italy), No Design, Motion Bank (Germany), LPP-CNRS Université Paris-Descartes, Universitat Pompeu Fabra (Spain), UserStudio, CRI-Paris Université Paris-Descartes, Goldsmiths University of London (United Kingdom), University of Geneva (Switzerland), LIMSI-CNRS Université Paris-Sud, LRI-CNRS Université Paris-Sud, Orbe.mobi, Plux (Portugal), Reactable Systems (Spain), UCL (United Kingdom), Univers Sons/Ultimate Sound Bank, Universidad Carlos III Madrid (Spain), University of Genoa (Italy), McGill University (Canada), ZHdK (Switzerland).