Abstract
We present the Multimodal Music Stand (MMMS) for the untethered sensing of performance gestures and the interactive control of music. Using e-field sensing, audio analysis, and computer vision, the MMMS captures a performer's continuous expressive gestures and robustly identifies discrete cues in a musical performance. Continuous and discrete gestures are sent to an interactive music system featuring custom-designed software that performs real-time spectral transformation of audio.
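The paper itself details the MMMS implementation; purely as an illustration of the kind of gesture-driven, real-time spectral transformation the abstract describes, the sketch below (an assumption for illustration, not the paper's actual code or mapping) applies a continuous gesture value as a frequency-domain tilt to one audio frame using NumPy:

```python
import numpy as np

def spectral_tilt(frame, gesture):
    """Frequency-domain transform of one audio frame.

    `gesture` in [0, 1] continuously darkens or brightens the spectrum.
    This mapping is hypothetical; the MMMS's actual transformations are
    described in the paper itself.
    """
    windowed = frame * np.hanning(len(frame))
    spectrum = np.fft.rfft(windowed)
    # Per-bin gain: gesture = 0 rolls off high bins, gesture = 1 boosts them.
    bins = np.linspace(0.0, 1.0, len(spectrum))
    gain = (1.0 - bins) + 2.0 * gesture * bins
    return np.fft.irfft(spectrum * gain, n=len(frame))

# One 512-sample frame of a 440 Hz test tone at 44.1 kHz.
frame = np.sin(2 * np.pi * 440 * np.arange(512) / 44100)
out = spectral_tilt(frame, gesture=0.8)
```

In a real-time system a loop like this would run per hop of a sliding window, with `gesture` updated from the e-field sensor between frames.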
Original language | English |
---|---|
Pages | 62-65 |
Number of pages | 4 |
DOIs | |
State | Published - 2007 |
Event | 7th International Conference on New Interfaces for Musical Expression, NIME '07 - New York, NY, United States |
Duration | Jun 6 2007 → Jun 10 2007 |
Conference
Conference | 7th International Conference on New Interfaces for Musical Expression, NIME '07 |
---|---|
Country/Territory | United States |
City | New York, NY |
Period | 06/6/07 → 06/10/07 |
Keywords
- Computer vision
- E-field sensing
- Interactivity
- Multimodal
- Untethered control