Most movement-based interactions offer “intuitive” interfaces and a trivial gesture vocabulary. While this facilitates adoption of the system, it also limits the possibility of more complex, expressive, and truly embodied interactions.
We propose shifting from notions of “intuitiveness” to notions of “learnability”. Our project addresses computational challenges in both methodology and modelling.
First, we must create methods for designing movement vocabularies that are easy to learn and to compose, so that users can build rich and expressive phrases of movements. Second, we must design computational models capable of analyzing users’ movements in real time to provide diverse feedback mechanisms and multimodal guidance (for example, visual and auditory).
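As a minimal illustrative sketch of the second goal, the snippet below matches an incoming stream of 2-D motion samples against recorded template gestures using dynamic time warping (DTW), a common baseline for this kind of real-time movement analysis; the nearest template could then drive visual or auditory feedback. All names (`GestureFollower`, the template gestures) are hypothetical, not part of the project itself.

```python
# Illustrative sketch only: a DTW-based nearest-template matcher for a
# stream of 2-D movement samples. Class and gesture names are hypothetical.

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of (x, y) points."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two samples being aligned.
            d = ((a[i - 1][0] - b[j - 1][0]) ** 2
                 + (a[i - 1][1] - b[j - 1][1]) ** 2) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

class GestureFollower:
    """Buffers incoming samples and reports the closest template gesture."""

    def __init__(self, templates):
        self.templates = templates  # name -> list of (x, y) samples
        self.buffer = []

    def feed(self, sample):
        """Add one (x, y) sample; return (best_name, distance) so far."""
        self.buffer.append(sample)
        name, tmpl = min(self.templates.items(),
                         key=lambda kv: dtw_distance(self.buffer, kv[1]))
        return name, dtw_distance(self.buffer, tmpl)

# Usage: two toy templates, a horizontal and a vertical stroke.
templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2), (0, 3)],
}
follower = GestureFollower(templates)
for s in [(0, 0), (1, 0.1), (2, 0.0)]:
    name, dist = follower.feed(s)
print(name)  # closest template after the partial stroke
```

In practice, the project’s models would need to go well beyond this sketch (probabilistic following, partial matches, continuous feedback), but it shows the basic loop: each new sample updates the analysis, which in turn could update the guidance given to the user.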
This project raises three fundamental research issues:
- How do we design movement and gesture vocabularies whose components are easy to learn, while supporting interaction techniques that go beyond simple commands?
- How do we account for sensorimotor learning in computational models of movement and interaction?
- How do we optimize feedback and computational guidance systems to facilitate skill acquisition?
The long-term objective is to encourage innovation in multimodal interaction, from non-verbal communication to interaction with digital media in creative applications.