Most movement-based interactions offer “intuitive” interfaces and a trivial gesture vocabulary. While such designs facilitate the adoption of a system, they also limit the possibility of more complex, expressive, and truly embodied interactions. The ELEMENT project proposes to shift the focus from intuitiveness/naturalness towards learnability. Our project addresses computational problems of methodology and modelling.
First, we must create methods for designing movement vocabularies that are easy to learn and to compose into rich, expressive movement phrases. Second, we must design computational models capable of analyzing users’ movements in real time to provide diverse feedback mechanisms and multimodal guidance (for example, visual and auditory).
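To make this kind of real-time analysis concrete, here is a minimal Python sketch (not the project’s actual models): it compares a live movement trace against a pre-recorded template with dynamic time warping (DTW) and maps the resulting distance to a feedback intensity that could drive visual or auditory guidance. All names, traces, and parameters are illustrative assumptions.

    def dtw_distance(template, performance):
        # Dynamic time warping (DTW) distance between two 1-D movement traces,
        # tolerant to differences in speed between template and performance.
        n, m = len(template), len(performance)
        inf = float("inf")
        cost = [[inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(template[i - 1] - performance[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],       # skip a template sample
                                     cost[i][j - 1],       # skip a performance sample
                                     cost[i - 1][j - 1])   # align the two samples
        return cost[n][m]

    def feedback_gain(distance, scale=10.0):
        # Map a DTW distance to a 0..1 intensity: the closer the performance
        # is to the template, the stronger the confirming feedback.
        # 'scale' is an illustrative tuning parameter, not a project value.
        return 1.0 / (1.0 + distance / scale)

    # Hypothetical traces: a recorded expert gesture and a live user attempt.
    template = [0.0, 0.2, 0.8, 1.0, 0.6, 0.1]
    performance = [0.0, 0.1, 0.7, 0.9, 0.9, 0.5, 0.1]
    d = dtw_distance(template, performance)
    print(f"DTW distance: {d:.2f}, feedback gain: {feedback_gain(d):.2f}")

In a real system the traces would be multi-dimensional sensor streams analyzed incrementally, but the principle of mapping a movement-similarity measure to graded feedback is the same.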
This project raises three fundamental research issues:
1) How do we design movements and gestures from components that are easy to learn, while supporting interaction techniques that go beyond simple commands?
2) How do we account for sensorimotor learning in computational models of movement and interaction?
3) How do we optimize feedback systems and computational guidance to facilitate skill acquisition?
The long-term objective is to foster innovation in multimodal interaction, from non-verbal communication to interaction with digital media in creative applications.
IRCAM Team: Sound Music Movement Interaction