Gesture Follower

Real-time following and recognition of time profiles

Principles

The Gesture Follower allows for real-time comparison of a gesture performed live with a set of prerecorded examples. The implementation can be seen as a hybrid between DTW (Dynamic Time Warping) and HMM (Hidden Markov Models).

In most standard gesture recognition systems, gestures are considered as units that must be recognised once completed. Therefore, these systems output results at discrete time events, typically at the end of each gesture.

The Gesture Follower corresponds to a different interaction paradigm, motivated by applications in expressive visual and sound control: the system outputs “continuously” (i.e. on a fine temporal grain) parameters characterising the performed gesture.

Precisely, two types of information are continuously updated. These are probabilistic estimations of

1) the similarity of the performed gesture to the prerecorded gestures (likelihood) and

2) the time progression of the performed gesture.

The first type of information allows for the selection of the likeliest gesture at any moment, and the second allows for the estimation of the current temporal index within that gesture, referred to here as “gesture following”.

These continuous output data are especially well suited for both selecting and synchronizing various continuous visual or sound processes to gestures.
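As an illustration only (not the actual gf implementation), the sketch below follows a single recorded template with a left-to-right HMM: one hidden state per recorded sample, Gaussian observation likelihoods, and self/next/skip transitions. At each live sample it outputs the two quantities described above, a likelihood and a time-progression index. The class name, transition weights, and `sigma` parameter are illustrative assumptions.

```python
import numpy as np

class TemplateFollower:
    """Follow one prerecorded template with a left-to-right HMM,
    one hidden state per recorded sample (illustrative sketch,
    not the actual gf implementation)."""

    def __init__(self, template, sigma=0.1):
        self.template = np.asarray(template, dtype=float)  # shape (T, D)
        self.n = len(self.template)
        self.sigma = sigma                 # observation noise (assumed value)
        self.alpha = np.zeros(self.n)      # forward probabilities over states
        self.alpha[0] = 1.0                # start at the beginning of the gesture

    def step(self, obs):
        """Consume one live sample; return (log-likelihood increment,
        time progression in [0, 1])."""
        obs = np.asarray(obs, dtype=float)
        # Left-to-right transitions: stay, advance by 1, or skip 1 sample.
        # The weights are illustrative; they bound the allowed speed variation.
        pred = 0.5 * self.alpha
        pred[1:] += 0.35 * self.alpha[:-1]
        pred[2:] += 0.15 * self.alpha[:-2]
        # Gaussian observation likelihood around each template sample.
        d2 = np.sum((self.template - obs) ** 2, axis=1)
        self.alpha = pred * np.exp(-d2 / (2.0 * self.sigma ** 2))
        norm = self.alpha.sum() + 1e-300
        self.alpha /= norm
        # Expected state index, normalised to [0, 1] = time progression.
        progression = float(self.alpha @ np.arange(self.n)) / (self.n - 1)
        return float(np.log(norm)), progression
```

Running one such follower per recorded gesture and accumulating the per-step log-likelihoods yields, at every input sample, both the likeliest gesture (argmax over followers) and where the performer currently is inside it.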

Demos

Basic Example

The Gesture Follower is a system for real-time following and recognition of time profiles. In this example, the Gesture Follower learns three gestures, i.e. drawings made with the mouse, while voice data is simultaneously recorded.

During the “performance”, the Gesture Follower recognizes which gesture is being performed and plays the corresponding sound, time-stretched or compressed depending on the pacing of the gesture.
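A minimal sketch of how such continuous output could drive sound playback (a hypothetical helper, not part of gf): given each recorded gesture's accumulated log-likelihood, its estimated time progression, and the duration of its associated recording, select the likeliest gesture and convert the progression into a playback position.

```python
def control_playback(logliks, progressions, durations):
    """Select the likeliest gesture (argmax of accumulated log-likelihood)
    and map its estimated time progression to a position, in seconds,
    inside the associated sound file. Illustrative sketch only."""
    best = max(range(len(logliks)), key=lambda i: logliks[i])
    return best, progressions[best] * durations[best]

# Example: gesture 1 is the likeliest, and we are halfway through it,
# so its 2-second recording should be at the 1.0 s mark.
gesture, position = control_playback([-10.0, -2.0, -30.0],
                                     [0.2, 0.5, 0.9],
                                     [4.0, 2.0, 1.0])
```

Calling this on every update keeps the playback position locked to the performer's pacing, which is what produces the time stretching and compression described above.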

Synchronizing dance and videos

The Gesture Follower is used to select and synchronize prerecorded videos, following the dancer's gestures. It uses data from inertial sensors worn on the dancer's wrists.

Software

The gf external is now part of the “MuBu for Max” distribution, available here:
http://forumnet.ircam.fr/shop/en/forumnet/59-mubu-pour-max.html

Please post any comments, bug reports, or feature requests to the MuBu for Max user group on forumnet.ircam.fr:
http://forumnet.ircam.fr/user-groups/mubu-for-max/

[Screenshot: gf help patch]

References

  • F. Bevilacqua, N. Schnell, N. Rasamimanana, B. Zamborlin, and F. Guédy, “Online Gesture Analysis and Control of Audio Processing,” in Musical Robots and Interactive Multimodal Systems: Springer Tracts in Advanced Robotics Vol 74, J. Solis and K. C. Ng, Eds., Springer Verlag, 2011, pp. 127-142.
    [BibTeX] [Download PDF]
    @incollection{Bevilacqua11b,
    author = {Bevilacqua, Frédéric and Schnell, Norbert and Rasamimanana, Nicolas and Zamborlin, Bruno and Guédy, Fabrice},
    editor = {Jorge Solis and Kia C. Ng},
    title = {Online Gesture Analysis and Control of Audio Processing},
    booktitle = {Musical Robots and Interactive Multimodal Systems: Springer Tracts in Advanced Robotics Vol 74},
    pages = {127-142},
    publisher = {Springer Verlag},
    year = {2011},
    url = {http://articles.ircam.fr/textes/Bevilacqua11b/index.pdf},
    }

  • F. Bevilacqua, B. Zamborlin, A. Sypniewski, N. Schnell, F. Guédy, and N. Rasamimanana, “Continuous realtime gesture following and recognition,” in Gesture in Embodied Communication and Human-Computer Interaction: Lecture Notes in Computer Science (LNCS) volume 5934, Springer Verlag, 2010, pp. 73-84.
    [BibTeX] [Download PDF]
    @incollection{Bevilacqua09b,
    author = {Bevilacqua, Frédéric and Zamborlin, Bruno and Sypniewski, Anthony and Schnell, Norbert and Guédy, Fabrice and Rasamimanana, Nicolas},
    title = {Continuous realtime gesture following and recognition},
    booktitle = {Gesture in Embodied Communication and Human-Computer Interaction: Lecture Notes in Computer Science (LNCS) volume 5934},
    pages = {73-84},
    publisher = {Springer Verlag},
    year = {2010},
    url = {http://articles.ircam.fr/textes/Bevilacqua09b/index.pdf},
    }

  • F. Bevilacqua, F. Guédy, E. Fléty, N. Leroy, and N. Schnell, “Wireless sensor interface and gesture-follower for music pedagogy,” in International Conference on New Interfaces for Musical Expression, New York, USA, 2007.
    [BibTeX] [Download PDF]
    @inproceedings{Bevilacqua07a,
    author = {Bevilacqua, Frédéric and Guédy, Fabrice and Fléty, Emmanuel and Leroy, Nicolas and Schnell, Norbert},
    title = {Wireless sensor interface and gesture-follower for music pedagogy},
    booktitle = {International Conference on New Interfaces for Musical Expression},
    address = {New York, USA},
    year = {2007},
    url = {http://articles.ircam.fr/textes/Bevilacqua07a/index.pdf},
    }

  • F. Bevilacqua and R. Müller, “A Gesture follower for performing arts,” in Gesture Workshop, 2005.
    [BibTeX] [Download PDF]
    @inproceedings{Bevilacqua05b,
    author = {Bevilacqua, Frédéric and Müller, Remy},
    title = {A Gesture follower for performing arts},
    booktitle = {Gesture Workshop},
month = {May},
    year = {2005},
    url = {http://articles.ircam.fr/textes/Bevilacqua05b/index.pdf},
    }

  • R. Müller, “Human Motion Following using Hidden Markov Models,” DEA Images et Systèmes Master Thesis, 2004.
    [BibTeX] [Download PDF]
    @mastersthesis{Muller04a,
    author = {Müller, Remy},
    title = {Human Motion Following using Hidden Markov Models},
    school = {INSA Lyon, Laboratoire CREATIS},
    type = {DEA Images et Systèmes},
    year = {2004},
    url = {http://articles.ircam.fr/textes/Muller04a/index.pdf},
    }