For my Master's in Music thesis, specialization in Sonic Arts, I propose a creative process for designing a DMI (Digital Musical Instrument) through an evaluation of sensors, in the context of interactive computer music performance. In this process I use the affordances and limitations of different types of sensors to generate ideas for possible DMI designs. The design I chose to construct is the GLOBE (Gesture-sensing, Luminous, Oscillating Ball Environment). It uses an Adafruit Feather HUZZAH with ESP8266, 16 FSRs (force-sensitive resistors) read through a CD74HC4067 16-channel analog multiplexer, an Adafruit TCS34725 colour sensor, a WS2812B LED strip, a SparkFun LIS3DH 3-axis accelerometer, and a Sharp IR distance sensor (4-30 cm). The sensors and physical geometry of the GLOBE support many gestures and instrumental techniques for the performer to control sound in real time. Two Etudes performed with the GLOBE premiered on YouTube Live on May 22nd, 2020; the thesis performance was live streamed due to COVID-19. The online audience was asked to complete a voluntary survey to provide their feedback on the instrument and the performance. The evaluation of the GLOBE as an effective musical instrument includes their feedback, as well as my perspective as the designer, composer/programmer, and performer.
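For readers curious how 16 FSRs can share the Feather HUZZAH's single analog input, here is a minimal sketch of the multiplexer-scanning idea. It is not the GLOBE's actual firmware: it is written in MicroPython for illustration, and the select-pin assignments are placeholders rather than the real wiring.

```python
# Minimal MicroPython sketch (ESP8266) for scanning 16 FSRs through a CD74HC4067.
# Pin numbers are illustrative, not the GLOBE's actual wiring.
from machine import Pin, ADC
import time

SELECT_PINS = [Pin(n, Pin.OUT) for n in (12, 13, 14, 15)]  # S0-S3 (hypothetical GPIOs)
adc = ADC(0)  # the ESP8266's single analog input, fed by the mux's common output

def read_fsr(channel):
    """Route one of the 16 mux channels to the ADC and return its reading (0-1023)."""
    for bit, pin in enumerate(SELECT_PINS):
        pin.value((channel >> bit) & 1)
    return adc.read()

while True:
    readings = [read_fsr(ch) for ch in range(16)]
    print(readings)  # in the GLOBE, sensor values are forwarded on to Max MSP
    time.sleep_ms(10)
```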
With the GLOBE, I was a finalist in the 2022 Guthman Musical Instrument Competition. More information about the competition can be found here: https://guthman.gatech.edu/2022-competition
Etude 1: The first etude showcases the GLOBE spinning in its original hamster ball stand. The etude is minimalistic in that it mostly employs one-to-one mappings of the Y axis of the accelerometer and the FSRs; it does not use the colour sensor or the IR sensor. Compositionally, this piece is partly improvised. There are two main sound engines within Etude 1: droning synths and Karplus-Strong synthesis. I adapted the drone from Samuel Pearce-Davies’ drone synth Max patch. My iteration is a combination of seven rectangle and sine waves that are slightly detuned from one another. Each of the rectangle and sine waves is added to a sawtooth wave. By randomly changing the multipliers of the waveforms, they morph between the sawtooth waveform and the rectangle or sine waveforms, which results in a change of timbre. These changes in the multiplier ramp smoothly from one value to the next with my [CK_Line] abstraction. I mapped the FSRs on the left side of the GLOBE to the pitch of these combined synths by converting the FSRs’ data to individual MIDI numbers. The last two FSRs toggle the master gain of the synths to ramp on/off. The Y axis of the accelerometer is also mapped to the cut-off frequency of the filters for both the Karplus-Strong engine and the drone.
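To make these mappings more concrete, here is a rough Python sketch of the Etude 1 logic. The note list, threshold, and cutoff range are placeholders for illustration; in the GLOBE these mappings are implemented in Max MSP.

```python
# Illustrative Python sketch of the Etude 1 mappings; all values are placeholders.
FSR_NOTES = [48, 50, 52, 53, 55, 57, 59, 60, 62, 64, 65, 67]  # hypothetical MIDI pitches

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear rescaling with clipping, analogous to Max's [scale] object."""
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def fsr_to_pitch(fsr_index, fsr_value, threshold=100):
    """Each left-side FSR triggers its own MIDI note once pressed past a threshold."""
    return FSR_NOTES[fsr_index] if fsr_value > threshold else None

def accel_y_to_cutoff(accel_y):
    """Map the accelerometer's Y axis (-1 g to +1 g) to a filter cutoff in Hz."""
    return scale(accel_y, -1.0, 1.0, 200.0, 8000.0)
```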
Etude 2: The second etude uses all of the sensors in the GLOBE and is more complicated in its playing techniques and sensor mappings. I base my decisions for mapping sounds to colours on objects and environments associated with both the colour and the sound. There is a bed track for each colour and a collection of shorter samples that play overtop. Each group of sounds passes through its own processing and effects so that each group sounds very distinct from the others.

The sounds mapped to red are crackling fire sounds, firetrucks, heartbeats, and cardinal bird chirps. This red group passes through granular synthesis. I mapped the Y axis of the accelerometer to change the maximum scaled value of the grains’ speed. I also mapped the IR sensor to change the maximum scaled values of the pitch and the length of the grains.

The green group contains an ambient forest soundscape with glass, sand, and shell samples. The Y axis of the accelerometer is mapped to the speed of the bed track and the IR sensor pitch shifts the bed track. Using [mc] objects, I make six instances of the pitch-shifted bed track and pass them through resonant bandpass filters. The FSRs are mapped to the six center frequencies of the filters and the Z axis of the accelerometer is mapped to the Q factor of the filters.

Various flowing water sounds are mapped to blue. The Y axis of the accelerometer pitch shifts the bed track. The sound then goes through a flanger. The X axis of the accelerometer is mapped to the delay time of the flanger and the IR sensor is mapped to the intensity of the flanger.

The sounds in the yellow group are more abstract than the sounds in the other colour groups because there are fewer yellow objects or animals that make distinct, recognizable sounds. There are buzzing bees, a rubber duck, canary chirps, dog barks, and a baby’s laughter. I set the bees bed track to go through a series of delays with varying rates as well as time stretching and a comb filter. The X axis of the accelerometer changes the time stretching and the Y axis changes the rate of the delay. When the delay times change, it creates a glissando effect. The Z axis is mapped to a lowpass filter that smooths out the transitions between delay times. The IR sensor is mapped to the feedback gain of the comb filter.

I use a polyphonic sample player formerly employed in my piece, Sound of My Hands, which consists of a [groove] object inside of a [poly] object. With the sample player I layer short samples overtop of the bed tracks. When the performer presses any FSR, the [urn] object randomly selects both a sample and the transposition of that sample.
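The sample-selection behaviour can be sketched outside of Max as well. The snippet below is a rough Python analogue of the [urn]-style selection: each FSR press draws a sample (and a transposition) without repetition until the pool is exhausted. The file names and transposition list are made up for illustration, and Max's [urn] requires an explicit clear rather than refilling itself.

```python
# Rough Python analogue of the [urn]-based sample selection; names are placeholders.
import random

class Urn:
    """Draw items in random order without repeats, refilling once the pool is empty."""
    def __init__(self, items):
        self.items = list(items)
        self.pool = []

    def draw(self):
        if not self.pool:
            self.pool = random.sample(self.items, len(self.items))
        return self.pool.pop()

samples = Urn(["bees.wav", "rubber_duck.wav", "canary.wav", "dog_bark.wav"])  # hypothetical files
transpositions = Urn([-12, -7, -5, 0, 5, 7, 12])  # semitones, illustrative

def on_fsr_press():
    """Called whenever any FSR is pressed: pick a sample and its transposition."""
    return samples.draw(), transpositions.draw()
```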
I made a GitHub repository for the colour detection bpatcher, so anyone with Max MSP installed can download it and try it out for themselves: https://github.com/CLKo1/colourdetect-maxmsp
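For anyone who wants to see the idea without opening Max, here is a rough Python analogue of the colour-detection logic: normalise the TCS34725's raw RGB counts and pick the nearest of the four target colours. The targets and the nearest-match approach are simplified placeholders, not necessarily what the bpatcher actually does.

```python
# Simplified Python analogue of the colour-detection idea used in Etude 2.
TARGETS = {
    "red":    (1.0, 0.0, 0.0),
    "green":  (0.0, 1.0, 0.0),
    "blue":   (0.0, 0.0, 1.0),
    "yellow": (1.0, 1.0, 0.0),
}

def classify_colour(r, g, b):
    """r, g, b: raw sensor counts. Returns the name of the nearest target colour."""
    total = max(r + g + b, 1)
    rn, gn, bn = r / total, g / total, b / total

    def dist(target):
        tr, tg, tb = target
        t_total = tr + tg + tb
        tr, tg, tb = tr / t_total, tg / t_total, tb / t_total  # normalise the target too
        return (rn - tr) ** 2 + (gn - tg) ** 2 + (bn - tb) ** 2

    return min(TARGETS, key=lambda name: dist(TARGETS[name]))

print(classify_colour(220, 180, 40))  # -> "yellow"
```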
Design Process
For DMIs designed for specific performers, or for performers with physical disabilities or limited mobility, the concept should always start with the performer's capabilities and gestures. For example, with TRAVIS I and II, the starting limitations are a violinist's typical playing gestures and the violin's physical geometry. But for an instrument aimed at the capabilities of the typical human body, rather than a specific type of instrumentalist, can a concept for a DMI effectively arise from the technology? This is my main research question. I made an application in Max MSP that randomly outputs a list of sensors. I then considered the sensors' affordances and limitations and challenged myself to think of new instruments that had to incorporate those sensors. My favourite design from this process, the Ball, is what developed into the GLOBE. I have a GitHub repository of the Max patch here: https://github.com/CLKo1/random-sensor-list
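The patch itself is only a handful of objects, and the idea is simple enough to sketch in a few lines of Python; the sensor pool below is an example list, not the exact one in my patch.

```python
# A small Python stand-in for the random-sensor-list Max patch.
import random

SENSOR_POOL = [  # illustrative pool of common sensor types
    "FSR", "accelerometer", "gyroscope", "IR distance sensor", "colour sensor",
    "microphone", "capacitive touch", "flex sensor", "light-dependent resistor",
    "piezo", "ultrasonic distance sensor", "potentiometer",
]

def random_sensor_list(n=3):
    """Return n distinct sensor types to brainstorm a DMI around."""
    return random.sample(SENSOR_POOL, n)

if __name__ == "__main__":
    for count in (1, 2, 3, 4):
        print(count, "sensor prompt:", random_sensor_list(count))
```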
If you don't have Max MSP installed, I also turned the patch into a standalone application. You can download it and try it on your own desktop here. This version is only compatible with macOS, not Windows.
I find that the one-sensor category is the easiest to come up with ideas for, and I often have multiple DMI design ideas per sensor. I also take note of which of these ideas could be made with an alternative sensor. For my best one-sensor ideas, I find that I intuitively want to add another sensor or two despite the one-sensor restriction. As I work through the two-sensor category, I only have one or two DMI ideas for each combination of sensors. When I consider the three-sensor category, there are times when I cannot think of any ideas for some combinations of sensors. Sometimes two of the given sensors are similar enough that it would be redundant to use both of them in one DMI, or it does not make sense to combine them given their limitations. Sometimes I cannot think of any ideas with the three or four provided sensors other than a DIY MIDI touchpad layout. Given these challenges, if I were to repeat this process, I would only consider lists of two random sensors, then come up with as many design ideas as possible, and finally consider whether any other sensor types would enhance the design. Let me know how it goes if you try it out!
Audience Evaluation
After the YouTube livestream, I asked the audience to fill out a voluntary survey for feedback on the GLOBE design and the overall performance. The survey questions were:
Could you see/hear the correlation between how the instrument was played and how it changed the sound? If so, was one piece easier in which to identify this correlation than the other?
How many ways of playing the instrument (instrumental technique) could you identify?
Do you have suggestions for other creative playing techniques that were not used during the performance?
Could you perceive mistakes in the performance, and if so, what were they?
What about the performance did you find MOST effective?
What about the performance did you find LEAST effective?
Any other thoughts about the performance you would like to add?
For the purposes of this study, the participants chose either to be referred to by their given name or to provide a pseudonym for themselves. There was a total of 17 participants. There were no recognizable patterns in their answers based on gender, but there were some similarities between their answers based on their level of expertise in both music and music technology.
1. Could you see/hear the correlation between how the instrument was played and how it changed the sound? If so, was one piece easier in which to identify this correlation than the other? Seven participants write "yes" but do not elaborate on their observations. Among the more descriptive responses, participants find that some aspects are easier to identify than others. Four participants indicate that it is easier to identify the correlation in Etude 1. Five participants specifically describe the FSRs in Etude 1 as easy to identify, and four participants express hesitation or confusion while describing how spinning the GLOBE affects the drone. In contrast, six participants find it easier to identify the correlation in Etude 2, which goes against my expectations. Six participants write about how the colour sensor selects the sounds. Only three participants write about the other sensors in Etude 2. All three of these participants have either seen or worked with new instruments before. Two participants describe uncertainty about the mappings in Etude 2.
2. How many ways of playing the instrument (instrumental technique) could you identify? I expected participants to think about and describe the gestures that they were seeing; I did not anticipate that seven participants would report a specific number. Three participants write “six”, three participants write “four”, and one writes “three”. One participant appears to focus heavily on mappings, of which he is unsure, instead of thinking about playing gestures. Three participants describe the types of sensors and what stimuli they measure. All three of these participants have either worked with, or seen concerts with, new instruments before. Nine participants describe how the GLOBE is interacted with as an instrument. Participants generally describe touching, rolling, spinning, and sensing the proximity of the hands, and only one participant, Dawn, mentions “shaking”.

3. Do you have suggestions for other creative playing techniques that were not used during the performance? Five participants do not have any suggestions, four of whom are amateur musicians who have not seen performances with new instruments before. Five participants suggest a form of throwing or tossing the GLOBE. Two participants, both of whom have worked with new instruments before, suggest ways of playing the GLOBE with multiple people. Other suggestions include building a track or course for the GLOBE, building a tall stand for it, rotating patterns with the LEDs, and using the GLOBE to tell a story.
4. Could you perceive mistakes in the performance, and if so, what were they? I ask this question because in Gurevich & Fyans’ (2011) study, they found that their audience understood little about the Tilt Synth, and therefore their audience could not perceive mistakes in the performance; the Tilt Synth was “error proof”. Eleven participants write that they could not perceive mistakes. Two participants feel unsure if what they identify are mistakes or not, such as alterations in melodic materials in Etude 1 or the sound being “muffled” when the hand is placed over the IR sensor in Etude 2. One participant thinks there is an intonation mistake in Etude 1, but this is impossible because in this piece I mapped the FSRs to exact pitches. There is only one participant who notices one of the mistakes that I as the performer self-identify as a true mistake: playing slightly off time during Etude 1.
5. What about the performance did you find MOST effective? Four participants write about having a preference towards Etude 2, and one participant writes that they like Etude 1 better. This difference in taste comes from participants finding Etude 2 more exciting and Etude 1 more conventional. Two participants specifically write about the slow introduction in Etude 1. Three participants write about the physical appearance of the GLOBE, or its mechanics. Five participants’ comments are general in nature and do not specify the piece.
6. What about the performance did you find LEAST effective? Four participants do not indicate any ineffectiveness in the performances. One participant finds Etude 1 the least effective because it was “very static”, whereas two other participants dislike Etude 2 because of its many layers of sound. Three participants comment on how the colour of the LEDs was difficult to see, and two more participants also mention this in another section of the survey. Two participants do not understand some aspect of the performance, and therefore they find this lack of knowledge the least effective part of their experience. This observation further supports Tarabella and Bertini’s (2004) claim that “people want to understand the rules of the new game, beside tasting and appreciating the musical result” (p. 220). Two participants dislike the drone in Etude 1 because they feel it contrasts too much with the Karplus-Strong engine. One participant also would like to see larger gestures.
7. Any other thoughts about the performance you would like to add? Twelve of the participants used the opportunity in question seven to write positive comments about how they enjoyed the performance and think the GLOBE is a fascinating DMI.
Other Observations: A high number of participants state that they find Etude 2 easier than Etude 1 to understand in terms of the correlation between playing techniques and the resultant sound; some participants are hesitant when describing the drone in Etude 1. A higher number of participants also write that Etude 2 is the most effective part of the concert. This is the opposite of my expectations, because I intended Etude 1 to have more direct one-to-one mappings.
My combined observations lead me to believe that, as a whole, the audience gained a basic, surface-level understanding of the GLOBE and could enjoy and appreciate the performance. However, they could not gain an in-depth understanding of the instrument the same way a person would from playing it. This overall experience is comparable to the audience experience at a classical music concert, where the audience members are not experts in playing the instrument being performed, but they can correctly observe how it functions and appreciate it.
While the audience did miss some small details of how some sounds are mapped and manipulated, and there are some clear aesthetic preferences that may not align with the musical presentation, there is no indication that this severely impeded their ability to engage with and enjoy the performance. This, combined with the overwhelmingly positive feedback in question 7 and the fact that four participants do not list anything in the performance as being least effective, leads me to believe that the GLOBE is an overall effective and successful DMI from the audience perspective.
Bibliography
Andresen, M., Bach, M., Kristensen, K., Nordahl, R., Serafin, S., Fontana, F., & Brewster, S. (2010). The LapSlapper - feel the beat. In Haptic and Audio Interaction Design: 5th International Workshop, HAID 2010, Copenhagen, Denmark, September 16-17, 2010. Proceedings (Vol. 6306, Lecture Notes in Computer Science, pp. 160-168). Berlin, Heidelberg: Springer Berlin Heidelberg.
Bresson, J., & Chadabe, J. (2017). Interactive Composition: new steps in computer music research, Journal of New Music Research, 46(1), 1-2. https://doi.org/10.1080/09298215.2017.1288748
Dahlstedt, P., & Dahlstedt, A. S. (2019). OtoKin: mapping for sound space exploration through dance improvisation. In Proceedings of the 2019 International Conference on New Interfaces for Musical Expression (NIME19), 156-161.
Dudas, R. (2010). "Comprovisation": the various facets of composed improvisation within interactive performance systems. Leonardo Music Journal, 20, 29-31. https://www.jstor.org/stable/40926370
Einarsson, A. (2017). Experiencing Responsive Technology in a Mixed Work: interactive music as embodied and situated activity. Organised Sound, 22(3), 418–427. https://doi.org/10.1017/S1355771817000425
Freed A., Wessel, D., Zbyszynski, M., & Uitti, F. M. (2006). Augmenting the Cello. In Proceedings of the 2006 International Conference on New Interfaces for Musical Expression (NIME06). 409-413.
Granieri, N., & Dooley, J. (2019). Reach, a keyboard-based gesture recognition system for live piano sound modulation. In Proceedings of the 2019 International Conference on New Interfaces for Musical Expression (NIME19). 375-376. http://www.open-access.bcu.ac.uk/id/eprint/7623
Gurevich, M., & Cavan Fyans, A. (2011). Digital Musical Interactions: performer–system relationships and their perception by spectators. Organised Sound, 16(2), 166–175. https://doi.org/10.1017/S1355771811000112
Jensenius, A., & Voldsund, A. (2012). The Music Ball Project: concept, design, development, performance. In Proceedings of the 2012 International Conference on New Interfaces for Musical Expression (NIME12). 300-303. http://urn.nb.no/URN:NBN:no-31755
Jordà, S., Geiger, G., Alonso, M., & Kaltenbrunner, M. (2007). The reacTable: exploring the synergy between live music performance and tabletop tangible interfaces. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction - TEI ’07, 139. https://doi.org/10.1145/1226969.1226998
Kimura, M. (2003). Creative Process and Performance Practice of Interactive Computer Music: a performer’s tale. Organised Sound, 8(3), 289–296. https://doi.org/10.1017/S1355771803000268
Kimura, M., Rasamimanana, N., Bevilacqua, F., Zamborlin, B., Schnell, N., & Fléty, E. (2012). Extracting Human Expression for Interactive Composition with the Augmented Violin. In Proceedings of the 2012 International Conference on New Interfaces for Musical Expression (NIME12).
Ko, C., & Oehlberg, L. (2020). Touch Responsive Augmented Violin Interface System II: Integrating sensors into a 3D printed fingerboard. In Proceedings of the 2020 International Conference on New Interfaces for Musical Expression (NIME20).
Koh, E., & Yadegari, S. (2018). Mugeetion: musical interface using facial gesture and emotion. In Proceedings of the International Computer Music Conference. 65-68. http://hdl.handle.net/2027/spo.bbp2372.2018.013
Lucas, A., Ortiz, M., & Schroeder, F. (2019) Bespoke Design for Inclusive Music: The challenges of evaluation. In Proceedings of the 2019 Conference on New Interfaces for Musical Expression (NIME19). 105-109.
Marshall, M., & Wanderley, M. (2006). Evaluation of Sensors as Input Devices for Computer Music Interfaces. In Computer Music Modeling and Retrieval: Third International Symposium, CMMR 2005, Pisa, Italy, September 26-28, 2005. Revised Papers (Vol. 3902, Lecture Notes in Computer Science, pp. 130-139). Berlin, Heidelberg: Springer Berlin Heidelberg.
Matossian, V., & Gehlhaar, R. (2015). Human Instruments: Accessible musical instruments for people with varied physical ability. In B.K. Wiederhold et al. (Eds.) Annual Review of Cybertherapy and Telemedicine. (pp. 202-207). IOS Press.
Medeiros, C., & Wanderley, M. (2014). A Comprehensive Review of Sensors and Instrumentation Methods in Devices for Musical Expression. Sensors, 14(8), 13556–13591. https://doi.org/10.3390/s140813556
Mikalauskas, C., Viczko, A., & Oehlberg, L. (2019). Beyond the Bare Stage: exploring props as potential improviser-controlled technology. Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction - TEI ’19, 35–43. https://doi.org/10.1145/3294109.3295631
Miranda, E., & Wanderley, M. (2006). New digital musical instruments: Control and interaction beyond the keyboard. (21). Middleton, Wis.: A-R Editions.
Morgan, C. (2018). A Motion-based Controller for Real-time Computer Music with applications for Dance Choreography and Music Composition: the design, construction and programming of a wireless accelerometer-based interface system. In Collin College Sabbatical Leave Archives. https://www.collin.edu/hr/benefits/sabbatical_archive.html
Murray-Browne, T., Aversano, D., Hobbes, W., Lopez, D., & Chapman, D. (2014). The Cave Of Sounds: an interactive installation exploring how we create music together. In Proceedings of the 2014 International Conference on New Interfaces for Musical Expression (NIME14), 307-310.
Neustaedter, C. & Sengers, P. (2012). Autobiographical Design in HCI Research: designing and learning through use-it-yourself. In Proceedings of the Designing Interactive Systems Conference (DIS’12). 514–523. https://doi.org/10.1145/2317956.2318034
Nyce, D. (2003). Linear Position Sensors: Theory and Application. Hoboken: John Wiley & Sons, Incorporated.
O’Modhrain, S. (2011). A Framework for the Evaluation of Digital Musical Instruments. Computer Music Journal, 35(1), 28–42. https://doi.org/10.1162/COMJ_a_00038
Overholt, D. (2011). Violin-Related HCI: A taxonomy elicited by the musical interface technology design space. In Arts and Technology: Second International Conference, ArtsIT 2011, Esbjerg, Denmark, December 10-11, 2011, Revised Selected Papers. 101 (pp. 80-89). Berlin, Heidelberg: Springer Berlin Heidelberg.
Paine, G. (2009). Towards Unified Design Guidelines for New Interfaces for Musical Expression. Organised Sound, 14(2), 142–155. https://doi.org/10.1017/S1355771809000259
Paine, G. (2004). Gesture and Musical Interaction: Interactive engagement through dynamic morphology. In Proceedings of the 2004 Conference on New Interfaces for Musical Expression (NIME04), 80-86.
Rowe, R. (1999). The aesthetics of interactive music systems. Contemporary Music Review, 18(3), 83-87. https://doi.org/10.1080/07494469900640361
Rowe, R. (1993). Interactive music systems: Machine listening and composing. Cambridge, Mass.: MIT Press.
Tahiroglu, K., Gurevich, M., & Knapp, R. B. (2018). Contextualising idiomatic gestures in musical interactions with NIMEs. In Proceedings of the 2018 Conference on New Interfaces for Musical Expression (NIME18). 126-131. https://doi.org/10.5281/zenodo.1302701
Tanaka, A. (2000). Musical performance practice on sensor-based instruments. In M. Wanderley & M. Battier (eds), trends in gestural control of music, 389-405. IRCAM - Centre Pompidou.
Tarabella, L., & Bertini, G. (2004). About the Role of Mapping in Gesture-Controlled Live Computer Music. In U. K. Wiil (Ed.), Computer Music Modeling and Retrieval (Vol. 2771, Lecture Notes in Computer Science, pp. 217-224). Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-39900-1_19
Thorn, S. D. (2019). Transference: a hybrid computational system for improvised violin Performance. Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction, 541–546. https://doi.org/10.1145/3294109.3301254
Turchet, L. (2018). Smart Mandolin: autobiographical design, implementation, use cases, and lessons learned. In Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion - AM’18, 1–7. https://doi.org/10.1145/3243274.3243280
Visi, F., & Dahl, L. (2018). Real-Time Motion Capture Analysis and Music Interaction with the Modosc Descriptor Library. In Proceedings of the 2018 International Conference on New Interfaces for Musical Expression (NIME18), 144-147.
Wang, J., d'Alessandro, N., Fels, S., & Pritchard, R. (2011). SQUEEZY: extending a multi-touch screen with force sensing objects for controlling articulatory synthesis. In Proceedings of the 2011 International Conference on New Interfaces for Musical Expression (NIME11). 531-532.
Walsh, L., Bluff, A., & Johnston, A. (2017). Water, image, gesture and sound: Composing and performing an interactive audiovisual work. Digital Creativity, 28(3), 177–195. https://doi.org/10.1080/14626268.2017.1353524
Weinberg, G., Orth, M., & Russo, P. (2000). The Embroidered Musical Ball: a squeezable instrument for expressive performance. CHI '00 Extended Abstracts on Human Factors in Computing Systems, 283-284. https://doi.org/10.1145/633292.633457
Winkler, T. (1998). Composing interactive music: Techniques and ideas using Max. Cambridge, Mass.: MIT Press.
Xiao X., Haddad, D.D., Sanchez, T., van Troyer, A., Kleinberger, R., Webb, P., Paradiso, J., Machover, T., Ishii, H. (2016). Kinéphone: exploring the musical potential of an actuated pin-based shape display. In Proceedings of the 2016 Conference on New Interfaces for Musical Expression (NIME16). 259-264.
Zappi, V., & McPherson, A. (2014). Dimensionality and appropriation in digital musical instrument design. In Proceedings of the 2014 Conference on New Interfaces for Musical Expression (NIME14). 455-460. https://doi.org/10.5281/zenodo.1178993