The multimodal music stand

Bo Bell, Jim Kleban, Dan Overholt, Lance Putnam, John Thompson, Joann Kuchera-Morin

Research output: Contribution to conference › Paper › peer-review

11 Scopus citations

Abstract

We present the Multimodal Music Stand (MMMS) for the untethered sensing of performance gestures and the interactive control of music. Using e-field sensing, audio analysis, and computer vision, the MMMS captures a performer's continuous expressive gestures and robustly identifies discrete cues in a musical performance. Continuous and discrete gestures are sent to an interactive music system featuring custom-designed software that performs real-time spectral transformation of audio.
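Note: this record does not include implementation details, so the following is a minimal illustrative sketch rather than the authors' actual MMMS software. It shows, in Python, the general shape of the pipeline the abstract describes: a continuous gesture value (as an e-field sensor might supply) modulating a block-wise spectral transformation of audio, with a discrete cue toggling the effect on. The sample rate, block size, the spectral-tilt transform, and all names are assumptions made for illustration.

    # Hypothetical sketch only: a continuous "gesture" value (0..1) tilts
    # spectral bin magnitudes, and a discrete cue enables the effect.
    # None of these names come from the paper itself.
    import numpy as np

    SR = 44100      # assumed sample rate (Hz)
    BLOCK = 1024    # assumed samples per processing block

    def spectral_transform(block: np.ndarray, gesture: float, active: bool) -> np.ndarray:
        """Apply a simple gesture-controlled spectral tilt to one audio block."""
        if not active:
            return block
        spectrum = np.fft.rfft(block * np.hanning(len(block)))
        # Continuous gesture tilts energy toward higher-frequency bins.
        bins = np.arange(len(spectrum))
        tilt = 1.0 + gesture * (bins / len(spectrum))
        return np.fft.irfft(spectrum * tilt, n=len(block))

    # Simulate one second of performance: a 440 Hz tone, a rising gesture
    # value, and a discrete cue arriving at block 20.
    t = np.arange(SR) / SR
    audio = np.sin(2 * np.pi * 440 * t)
    out = np.zeros_like(audio)
    n_blocks = len(audio) // BLOCK
    active = False
    for i in range(n_blocks):
        if i == 20:                # discrete cue detected (e.g., by vision)
            active = True
        gesture = i / n_blocks     # stand-in for a continuous e-field reading
        sl = slice(i * BLOCK, (i + 1) * BLOCK)
        out[sl] = spectral_transform(audio[sl], gesture, active)

In the system the paper describes, the gesture and cue streams would arrive from the stand's sensors rather than being simulated, and the spectral processing would run inside a real-time audio callback.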

Original language: English
Pages: 62-65
Number of pages: 4
DOIs
State: Published - 2007
Event: 7th International Conference on New Interfaces for Musical Expression, NIME '07 - New York, NY, United States
Duration: Jun 6 2007 – Jun 10 2007

Conference

Conference: 7th International Conference on New Interfaces for Musical Expression, NIME '07
Country/Territory: United States
City: New York, NY
Period: 06/6/07 – 06/10/07

Keywords

  • Computer vision
  • E-field sensing
  • Interactivity
  • Multimodal
  • Untethered control
