News

Jules Françoise successfully defended his PhD Thesis “Motion-Sound Mapping by Demonstration”

25 Mar 2015

Jules Françoise successfully defended his PhD thesis, “Motion-Sound Mapping by Demonstration”, on March 18. Jules conducted his PhD in the ISMM team with Frédéric Bevilacqua and Thierry Artières.

The jury was composed of Dr. Catherine ACHARD (ISIR, UPMC), Dr. Olivier CHAPUIS (INRIA, Orsay), Dr. Thierry DUTOIT (University of Mons), Dr. Rebecca FIEBRINK (Goldsmiths, University of London), Dr. Sergi JORDÀ (Universitat Pompeu Fabra, Barcelona), and Dr. Marcelo WANDERLEY (Professor at McGill University, Montréal).

Abstract:

Designing the relationship between motion and sound is essential to the creation of interactive systems. This thesis proposes an approach to the design of the mapping between motion and sound called Mapping-by-Demonstration. Mapping-by-Demonstration is a framework for crafting sonic interactions from demonstrations of embodied associations between motion and sound. It draws upon existing literature emphasizing the importance of bodily experience in sound perception and cognition. It uses an interactive machine learning approach to build the mapping iteratively from user demonstrations.

Drawing upon related work in the fields of animation, speech processing and robotics, we propose to fully exploit the generative nature of probabilistic models, from continuous gesture recognition to continuous sound parameter generation. We studied several probabilistic models from the perspective of continuous interaction. We examined both instantaneous (Gaussian Mixture Model) and temporal (Hidden Markov Model) models for recognition, regression and parameter generation. We adopted an Interactive Machine Learning perspective with a focus on learning sequence models from few examples, and on continuously performing recognition and mapping. The models either focus on movement or integrate a joint representation of motion and sound. In movement models, the system learns the association between the input movement and an output modality that might be gesture labels or movement characteristics. In motion-sound models, we model motion and sound jointly, and the learned mapping directly generates sound parameters from input movements.

We explored a set of applications and experiments relating to real-world problems in movement practice, sonic interaction design, and music. We proposed two approaches to movement analysis based on Hidden Markov Models and Hidden Markov Regression, respectively. We showed, through a use case in Tai Chi performance, how the models help characterize movement sequences across trials and performers. We presented two generic systems for movement sonification. The first system allows users to craft hand gesture control strategies for the exploration of sound textures, based on Gaussian Mixture Regression. The second system exploits the temporal modeling of Hidden Markov Regression to associate vocalizations with continuous gestures. Both systems gave rise to interactive installations that we presented to a wide public, and we have begun investigating their potential to support gesture learning.
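For readers curious how a joint motion-sound model can generate sound parameters directly from movement, the sketch below illustrates the general idea of Gaussian Mixture Regression: fit a Gaussian mixture over joint motion-sound features recorded during a demonstration, then take the conditional expectation of the sound parameters given each incoming motion frame. This is only a minimal illustration, not code from the thesis; the synthetic data, feature dimensions, and use of scikit-learn are assumptions made for the example.

```python
# Minimal sketch of Gaussian Mixture Regression (GMR) for motion-to-sound
# mapping. Assumes NumPy and scikit-learn; data and dimensions are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

# Demonstration data: motion features (e.g. hand position) and sound
# parameters (e.g. filter frequency, gain) recorded jointly, frame by frame.
rng = np.random.default_rng(0)
motion = rng.normal(size=(500, 2))                   # 2-D motion features
sound = np.tanh(motion @ rng.normal(size=(2, 3)))    # 3-D sound parameters

# Learn a joint density over [motion, sound] with a Gaussian Mixture Model.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(np.hstack([motion, sound]))

d_in = motion.shape[1]

def gmr(x):
    """Conditional expectation E[sound | motion = x] under the joint GMM."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    resp = np.empty(len(weights))
    cond_means = np.empty((len(weights), means.shape[1] - d_in))
    for k in range(len(weights)):
        mu_x, mu_y = means[k, :d_in], means[k, d_in:]
        S_xx = covs[k, :d_in, :d_in]
        S_yx = covs[k, d_in:, :d_in]
        inv = np.linalg.inv(S_xx)
        diff = x - mu_x
        # Responsibility of component k for the input motion frame.
        density = np.exp(-0.5 * diff @ inv @ diff) / np.sqrt(
            np.linalg.det(2 * np.pi * S_xx))
        resp[k] = weights[k] * density
        # Conditional mean of the sound parameters for component k.
        cond_means[k] = mu_y + S_yx @ inv @ diff
    resp /= resp.sum()
    return resp @ cond_means

# At performance time, each incoming motion frame yields sound parameters.
print(gmr(np.array([0.3, -0.1])))
```

In this instantaneous formulation each frame is mapped independently; the temporal models discussed in the abstract (Hidden Markov Regression) additionally condition the output on the progression through the demonstrated gesture.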


