Le son au bout des doigts is an installation presented June 1–18 at the Centre Pompidou as part of Ircam’s Manifeste festival, built on ISMM’s gesture recognition and interactive sound synthesis technology.
It offers a playful, interactive journey in sound and image for children ages two and up. Through manipulation and listening, the children’s sight and hearing are both engaged. In the Topo-phonie Café imagined by B. MacFarlane, the children are guided through a game of organic structures: they set special tables that create sonic surprises.
With DIRTI, developed by User Studio/Matthieu Savary, children trigger sounds and images by sinking their hands into the different materials that fill interactive tubs. More…
The XMM library contains probabilistic models for motion recognition and for mapping between movement and media. It was developed for movement interaction in creative applications and implements an interactive machine learning workflow with fast training and continuous, real-time inference. XMM is a portable, cross-platform C++ library that implements Gaussian Mixture Models and Hierarchical Hidden Markov Models for both recognition and regression.
This library is one of the outputs of Jules Françoise’s PhD. It is an open-source C++ library (a proprietary closed-source license is possible for commercial applications).
Max/MSP implementations are freely available in the MuBu & Friends package, as explained here.
More information: http://julesfrancoise.com/xmm/
Source code: https://github.com/Ircam-RnD/xmm
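To give a concrete sense of the workflow XMM implements, here is a minimal, self-contained sketch of the demonstrate-then-recognize loop: a few example frames are recorded per gesture class, a model is trained quickly, and incoming frames are classified continuously. For brevity each class is modelled here by a single diagonal-covariance Gaussian (a one-component GMM); XMM itself provides full Gaussian Mixture Models and Hierarchical Hidden Markov Models, and its actual API differs from this illustration.

```cpp
// Sketch of the interactive machine learning workflow (not the XMM API):
// record example frames per gesture class, train fast, then run continuous
// frame-by-frame recognition. Each class is a one-component diagonal GMM.
#include <cmath>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Frame = std::vector<double>;  // one motion frame, e.g. 3-axis acceleration

struct Gaussian {
    std::vector<double> mean, var;

    // Fit mean and per-dimension variance from the recorded example frames.
    void train(const std::vector<Frame>& frames) {
        const size_t dim = frames.front().size();
        mean.assign(dim, 0.0);
        var.assign(dim, 0.0);
        for (const auto& f : frames)
            for (size_t d = 0; d < dim; ++d) mean[d] += f[d] / frames.size();
        for (const auto& f : frames)
            for (size_t d = 0; d < dim; ++d)
                var[d] += (f[d] - mean[d]) * (f[d] - mean[d]) / frames.size();
        for (auto& v : var) v = std::max(v, 1e-6);  // avoid degenerate variance
    }

    // Log-likelihood of one incoming frame under this class model.
    double logLikelihood(const Frame& f) const {
        const double kTwoPi = 6.283185307179586;
        double ll = 0.0;
        for (size_t d = 0; d < f.size(); ++d) {
            const double diff = f[d] - mean[d];
            ll += -0.5 * (std::log(kTwoPi * var[d]) + diff * diff / var[d]);
        }
        return ll;
    }
};

int main() {
    // 1. Demonstration phase: a few example frames per labelled gesture.
    std::map<std::string, std::vector<Frame>> examples = {
        {"shake", {{0.9, -0.8, 0.7}, {1.1, -1.0, 0.9}, {0.8, -0.9, 1.0}}},
        {"tilt",  {{0.1, 0.2, 0.0}, {0.2, 0.3, 0.1}, {0.0, 0.1, 0.2}}},
    };

    // 2. Fast training: one model per class.
    std::map<std::string, Gaussian> models;
    for (const auto& [label, frames] : examples) models[label].train(frames);

    // 3. Continuous inference: pick the likeliest class for a live frame.
    Frame live = {1.0, -0.9, 0.8};
    std::string best;
    double bestLL = -1e300;
    for (const auto& [label, model] : models) {
        const double ll = model.logLikelihood(live);
        if (ll > bestLL) { bestLL = ll; best = label; }
    }
    std::cout << "likeliest gesture: " << best << std::endl;
}
```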
Jules Françoise successfully defended his PhD thesis, “Motion-Sound Mapping by Demonstration”, on March 18. Jules conducted his PhD in the ISMM team with Frédéric Bevilacqua and Thierry Artières.
The jury was composed of Dr. Catherine ACHARD (ISIR, UPMC), Dr. Olivier CHAPUIS (INRIA, Orsay), Dr. Thierry DUTOIT (University of Mons), Dr. Rebecca FIEBRINK (Goldsmiths, University of London), Dr. Sergi JORDÀ (Universitat Pompeu Fabra, Barcelona), and Dr. Marcelo WANDERLEY (Professor at McGill University, Montréal). More…
MaD allows for the simple and intuitive design of continuous gestural sound interaction. The motion-sound mapping is learned automatically by the system when movement and sound examples are recorded jointly. In particular, our applications focus on vocal sounds, recorded while performing the action, as the primary material for interaction design. The system integrates specific probabilistic models with hybrid sound synthesis models. Importantly, it is independent of the type of motion/gesture sensing device and can directly accommodate different sensors such as cameras, contact microphones, and inertial measurement units. Applications range from performing arts and gaming to medical uses such as auditory-aided rehabilitation. More…
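To illustrate the demonstrate-then-play principle behind mapping by demonstration, here is a toy sketch: motion and sound-descriptor frames are recorded jointly, and at performance time live motion frames are mapped back to sound parameters. MaD itself relies on probabilistic models (such as hidden Markov regression) rather than the nearest-neighbour lookup used below; the data layout and workflow shown here are purely illustrative assumptions.

```cpp
// Toy mapping-by-demonstration sketch (not the MaD system): joint recording of
// motion and a sound descriptor, then nearest-neighbour lookup at performance
// time in place of the probabilistic regression used by the real system.
#include <cmath>
#include <iostream>
#include <limits>
#include <vector>

struct JointFrame {
    std::vector<double> motion;  // e.g. a 2-axis accelerometer sample
    double soundParam;           // e.g. a vocal descriptor such as loudness
};

double sqDist(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0.0;
    for (size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// "Demonstration": motion captured while vocalising, stored frame by frame.
std::vector<JointFrame> demonstration = {
    {{0.0, 0.1}, 0.10}, {{0.3, 0.4}, 0.35}, {{0.7, 0.8}, 0.80}, {{1.0, 1.0}, 1.00},
};

// "Performance": map an incoming motion frame to a sound-control value by
// returning the sound parameter of the closest recorded motion frame.
double mapMotionToSound(const std::vector<double>& liveMotion) {
    double best = std::numeric_limits<double>::max();
    double param = 0.0;
    for (const auto& f : demonstration) {
        const double d = sqDist(liveMotion, f.motion);
        if (d < best) { best = d; param = f.soundParam; }
    }
    return param;  // would drive a synthesis parameter in a real system
}

int main() {
    std::vector<double> live = {0.6, 0.7};
    std::cout << "sound parameter: " << mapMotionToSound(live) << std::endl;
}
```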
We just released the beta version of mubu.*mm, a set of objects for probabilistic modeling of motion and sound relationships. More…