A multimodal system for gesture recognition in interactive music performance

Dan Overholt, John Thompson, Lance Putnam, Bo Bell, Jim Kleban, Bob Sturm, Jo Ann Kuchera-Morin

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

A multimodal system is presented for gesture recognition in untethered interactive flute performance. A performer's discrete cues are identified remotely using computer vision, audio analysis, and electric-field sensing, and continuous expressive gestures are captured during musical performance. Cues and gestures common among performers allow the player to communicate naturally with an interactive music system, in the same way they would communicate with another performer. The system features custom-designed electronics and software that perform real-time spectral transformation of audio from the flute. The approach relies only on non-contact sensors: microphones, cameras, and electric-field sensors embedded in a music stand, called the Multimodal Music Stand System (MMSS). This untethered sensor array within the music stand provides data to an analysis system that identifies a set of predetermined gestures as discrete cues.
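The abstract mentions real-time spectral transformation of the flute audio. The paper's own processing chain is not reproduced here, but a minimal sketch of one common STFT-based transformation (a spectral low-pass that zeroes frequency bins above a cutoff, with windowed overlap-add resynthesis) illustrates the general technique; the function name, parameters, and the choice of low-pass filtering are illustrative assumptions, not the authors' method.

```python
import numpy as np

def spectral_lowpass(signal, sr, cutoff_hz, n_fft=1024, hop=256):
    """Illustrative STFT transform: zero bins above cutoff_hz,
    then resynthesize by windowed overlap-add.

    NOTE: a generic sketch of spectral processing, not the
    MMSS system's actual transformation.
    """
    window = np.hanning(n_fft)
    cutoff_bin = int(cutoff_hz * n_fft / sr)
    n_frames = 1 + (len(signal) - n_fft) // hop
    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    for i in range(n_frames):
        start = i * hop
        frame = signal[start:start + n_fft] * window
        spectrum = np.fft.rfft(frame)
        spectrum[cutoff_bin:] = 0.0          # drop bins above the cutoff
        out[start:start + n_fft] += np.fft.irfft(spectrum, n_fft) * window
        norm[start:start + n_fft] += window ** 2
    norm[norm < 1e-8] = 1.0                  # avoid division by zero at the edges
    return out / norm
```

In a live setting the same per-frame loop would run on incoming audio buffers rather than a whole array; arbitrary spectral modifications (filtering, freezing, cross-synthesis) slot in where the bins are zeroed here.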

Original language: English
Pages (from-to): 69-82
Number of pages: 14
Journal: Computer Music Journal
Volume: 33
Issue number: 4
DOIs
State: Published - Dec 2009
