Sonic Gestures: Investigating Joy in Physical Sound Interactions

Submitted in Partial Fulfilment of MSc Sound Design at The University of Edinburgh.



Jack Ridley

Introduction

“Funny is often better than serious… wit is never a bad thing in design! It can offer humour, interest, commentary, or whimsy (an art in itself).” - Perry Cook1

My interest and inspiration

This project was initially born from an interest in physical sound interactions. I understand physical sound interactions to refer broadly to any interaction where a sonic output is produced based on input data from a physical gesture or action. In this project I will be using the term to refer more specifically to physical interactions with digital sound systems. I also focus my discussion on those interactions where the physical actions themselves act as the interface between user and sound, rather than, for example, the physical action moving a visible slider, where there is still physical input being transferred to sonic output, but the presence of a static interface mediates the connection between action and sound. Therefore, from here on, unless stated otherwise, by physical sound interactions I refer to what might be more accurately described as real-time physical interactions with digital sound.

My interest in this type of sonic encounter comes from personal experiences where these kinds of interactions have felt like they elicit a unique type of joy and fun. My earlier study in performance art at QMUL sparked an interest in participatory and interactive performance forms such as Punchdrunk’s immersive productions2, or Janet Cardiff’s sound walks3. This interest in interactivity has been fostered in a more technical light over the past two years, as I have started working with software and hardware such as game engines (Unity, Unreal, Wwise), visual coding environments (Max MSP, Pure Data), and microcontrollers (Arduino), which make it possible to create interactive media that can be designed, tested, and iterated upon relatively quickly using mostly just a single laptop. This project then feels like a natural progression of this personal route of artistic practice and research, investigating how I can bring these new technical skills and interests into a more immediately live and physical space that might be described as participatory live art.

My aims with this project

My aims with this research project then exist in quite a personal scope. I tried with this project to build experiments that would inform my own design process and style within the realm of physical sound interactions. My specific investigation areas were:

  • How can I design more joyful physical sound interactions?
  • How can I design more meaningful physical sound interactions?

These two questions have provided me with a lot of fuel and inspiration throughout this project, but they undeniably bring up many questions of their own with regards to how I am defining or assessing the qualities of ‘joyful’ and ‘meaningful’. As this project ultimately aims to inform my own personal style and design process, within this project these terms generally mean joyful and meaningful to me as an artist, designer, and participant in my own right. I do at points take influence from discussions with others and note how their interpretations of experience differ from my own, but at this stage of research, running any serious studies of how a large group of participants experienced the interactions was beyond the scope and timeline I was working within.

Context

The area that my project has ended up focusing on - designing physical sound interactions for hands - has a significant precedent in the field of designing interfaces for digital musical and sonic tools. Archaeological evidence points to the first tools made by humans being handheld. Therefore, it should be unsurprising that many artists and designers have started their experiments with controlling digital sounds in novel, physical ways with hand-controlled devices. It’s also worth noting that most traditional musical instruments that have made it into our pop culture over the centuries feature prominent use of the hands.

Notable designers who have created and worked with hand-controlled devices for digital sound, and who have influenced this project, include: Atau Tanaka,4 Imogen Heap (and MiMU),5 Douglas McCausland,6 Onyx Ashanti,7 and Michel Waisvisz.8 These artists have each been influential, largely in their broad approaches to mapping (what parameters they choose to control with what gestures), their performance styles (what actions they seem to lean into and enjoy performing), and what they describe as the aesthetic appeal of using physical hand interfaces.

My project takes a slightly skewed approach to this world of controlling digital sound with your hands, as it began from a specifically emotional question: how can I gear these kinds of interactions towards joy and fun for the user? Whilst I acknowledge that this project has necessarily become quite personal and specific to my own tastes, I hope that this line of research will inform ways that I can design interactions to be used not just by myself, but also by members of the general public. This distinguishes my project from many of the artists listed above, as many of them focused their designs on either a personal performance practice or a commercial performance tool aimed at musicians and performers.

How to read this dissertation

The ideal way to read this dissertation would be to read along and, using a LeapMotion camera, test each interaction as its Max patch is referenced. Of course this might not be possible, so I have tried to make the project accessible through extensive video documentation of the interactions in Appendix A.

With this project I have submitted a Windows application (LeapMotionOSC.exe - Appendix D, Item 2) which connects a LeapMotion camera to OSC, and a series of Max MSP patches that contain the sound interactions I designed for my project. These pieces of software will allow you to play and experience the sound interactions if you have access to a LeapMotion camera. The Max patches also contain further notes on the more technical details of each interaction, which can be seen in the visual code and comments.

(For info on the technical aspects of how I connected the LeapMotion to Max MSP, see Appendix E, Item 16)
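To give a flavour of how this pipeline works, here is a minimal sketch in Python (using the python-osc library) of receiving hand-tracking data over OSC, analogous to what the Max patches do with a [udpreceive] object. The address patterns and port below are illustrative assumptions, not the actual values used by LeapMotionOSC.exe.

    # Minimal sketch: receive hand-tracking data over OSC.
    # Requires: pip install python-osc
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_palm_position(address, x, y, z):
        # Each message carries a palm's 3D position (LeapMotion reports mm).
        print(f"{address}: x={x:.1f} y={y:.1f} z={z:.1f}")

    dispatcher = Dispatcher()
    dispatcher.map("/leap/hand/left/palm", on_palm_position)   # hypothetical address
    dispatcher.map("/leap/hand/right/palm", on_palm_position)  # hypothetical address

    server = BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher)  # assumed port
    server.serve_forever()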

During each chapter I recommend that the reader stops and watches each video from the appendix as I reference it, and likewise opens each Max patch as it is referenced, since the comments inside the patches explain some of the more technical aspects of each interaction.

Due to limitations on how long this report is permitted to be, I have had to banish many pieces of information to the Appendices. As such, if you as a reader find the main writing passing over technical elements too quickly, I strongly recommend having a look in the Appendices, which often contain further video documentation and blog posts providing more insight into the smaller details of what I did within this project. At the end of each chapter subsection I name the specific Appendix sections that contain relevant information.

Note on my approach to the project structure

From the beginning of my project the experience of joy during physical sound interactions has been at the forefront of my interests in this area. The project has been built as a way to find excuses to chase the experience of joy through designing interactions with sound.

This has proved an interesting chase, with various stages of: trying to identify what I mean by joy, analysing what aspects of design foster this kind of experience, experimenting with implementing these elements in my own designs, and finally reflecting on the successes and failures of these experiments.

This format suggested to me an experimental method that would provide an inner structure to the project:

  • Theory that suggests an aspect of joy within interactions
  • Experiment(s) that create sound interactions designed to incorporate this aspect of joy
  • The results/reflections of the experiment: success or failure, and how it will likely influence my future designs

I also like this experimental method from an organisational standpoint, as it makes the project more manageable for future reference: individual experiments within the project can be more easily identified and referred to.






Chapter 1: Joy Through the Intuitive

Theory - Don Norman’s Psychology of Design

My initial conception of joy involved some sense of ease of use - I was effectively defining joy as the absence of negative feelings, like frustration or anger, that I thought prevent an experience from being joyful. This research led me to Don Norman’s principles of design. Norman writes on the psychology of designing for human beings in his well-known book The Design of Everyday Things.9 Norman describes how people encounter two things when they use a new object:

  • The Gulf of Execution, where they “try to figure out how [the thing] operates”10
  • The Gulf of Evaluation, where they “try to figure out what happened”11

Norman states that the job of a designer is “to help people bridge the two gulfs.” Failing to do so might result in a person “blaming themselves” for failure to use the device, and possibly giving up on the interaction all together.12 Whilst not overtly linked to joy here, these ideas felt close to my initial ideas of ease of use that I personally linked to joy.

“Both execution and evaluation can affect our emotional state.” 13

In this discourse, execution and evaluation are driven by goals. For Norman, the whole interaction starts with a goal of some sort, followed by the execution that tries to achieve it, and then the evaluation to determine whether the goal has been achieved. Crucially, the evaluation stage involves the confirmation or investigation of why the execution succeeded or failed. The understanding of how an object or device works based on this evaluation is what Norman calls a conceptual model. In order to build this conceptual model, the device must provide feedback - information which, during the evaluation stage, will inform the person what has happened and why.

I’ve therefore extrapolated from these principles a personal design framework that I used to investigate this potential for joy in interaction. The framework prioritises these things:

  • provide the user with a good conceptual model of the experience
  • foster a goal setting environment within the interaction

These next three experiments demonstrate a few attempts at using this ease of use framework to investigate joy in physical sound interactions.


Experiment 1.1 - Dial Music

Experimental Method

This experiment existed as two prototypes. The first prototype I refer to as ‘Pinch Sine Waves’. It can be seen/heard here:

Appendix A - Video 2.2: Pinch Sine Waves Prototype 01

This prototype focused on making the conceptual model for the interaction as simple as possible. I used simple 1:1 mappings for the sound parameters.

I also hoped that the similarity of the rotation motion to the real-life physical action of turning a dial to change a parameter would make the whole action feel more intuitive.

To add some extra sonic interest to the interaction I purposefully chose frequency ranges for each oscillator that would result in audible beating frequencies between the two oscillators when played simultaneously.
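For readers unfamiliar with beating: two sine waves whose frequencies sit a few hertz apart sum to a single tone whose loudness pulses at the difference frequency |f1 - f2|. A minimal sketch with illustrative frequencies (not the ones used in the patch):

    import numpy as np

    sr = 44100
    t = np.arange(sr * 2) / sr                  # two seconds of samples
    f1, f2 = 220.0, 224.0                       # 4 Hz apart -> 4 beats per second
    mix = 0.5 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))
    # Equivalent view: a 222 Hz tone whose amplitude is modulated by a slow
    # 2 Hz cosine envelope, giving 4 loudness peaks per second.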

The second prototype I refer to as ‘Dial Music’. It can be seen/heard here:

Appendix A - Video 2.3: Pinch Sine Waves Prototype 02 - ‘Dial Music’

(you can also have a look at the max patch - or even try it out if you have a LeapMotion camera! - in Appendix C: Patch 1)

In this prototype I tried to expand the possibilities for the interaction by introducing a musical scale through discrete mapping.

I hoped here that the pre-associations that a user might have with how to play a musical instrument might make the interaction more intuitive by leading them to explore familiar concepts such as melody or harmony.

By adding these musical possibilities, I was trying to introduce more opportunities for the user to set themselves goals that they could then achieve.
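To illustrate what I mean by discrete mapping, here is a sketch of clamping a continuous control value onto the nearest note of C major, with a second axis selecting the octave. The ranges and note choices are assumptions for illustration; the actual mapping lives in the Max patch.

    C_MAJOR = [0, 2, 4, 5, 7, 9, 11]            # semitone offsets from C

    def to_midi_note(angle_norm: float, octave: int) -> int:
        """angle_norm in [0, 1) picks one of the seven scale degrees."""
        degree = min(int(angle_norm * 7), 6)
        return 60 + 12 * (octave - 4) + C_MAJOR[degree]   # 60 = middle C

    def midi_to_hz(note: int) -> float:
        return 440.0 * 2 ** ((note - 69) / 12)

    print(midi_to_hz(to_midi_note(0.5, 4)))     # ~349.23 Hz (F4)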

(see Appendix E: Item 3 for more technical details of this experiment)

Results/Evaluation

I really enjoyed the tactility of the pinch action in these prototypes - for me, the sense of physicality that this gives the interaction was really satisfying. The fact that the volume of the sound was loudest and most steady only when my fingers were touching each other felt like an apt description of the way a sound fades in and becomes more physically present as its volume increases.

I liked how simple the first prototype was: the fact that it didn’t have too much possible range meant that the single sonic experience of beating frequencies could be really explored. However, the lack of variety of sounds possible did limit the amount of time I would want to play with this interaction.

The second prototype definitely felt like it offered longer interaction possibilities as I wanted to spend more time trying to make music with it. It definitely did feel good and satisfying when I managed to (very occasionally) successfully execute an intended melodic phrase.

However, I think the second prototype lost a certain pleasing aesthetic that the first prototype had. The first prototype, while limited, felt like a single action that could be explored and made sense as a whole. Meanwhile, the second prototype felt like a slightly more arbitrary copy of a musical instrument. For example, the frequency mappings felt somewhat unnatural in the way that they had been clamped into C major, making me more aware that the interaction was being forced into the form of a musical device in a way that might be against its nature. Similarly, whilst connecting height to octaves makes sense in terms of people associating higher frequencies with higher positions, the fact that a certain distance in the y-axis suddenly caused an octave leap felt rather arbitrary. This lack of authenticity in the interaction made it less enjoyable for me.


Experiment 1.2 - Sampler

Experimental Method

For this experiment I made a patch that I refer to as ‘Sampler’. It can be seen/heard here:

Appendix A - Video 8.1: Sampler Demo

(you can also have a look at the max patch - or even try it out if you have a LeapMotion camera! - in Appendix C: Patch 8)

With this experiment I again tried to make an interaction that would make it as easy as possible for the user to create a good conceptual model. To achieve this I kept the necessary actions for control as simple as possible, using the one consistent ‘mechanic’ of a velocity-based threshold trigger for all aspects of the interaction.

I also hoped that the conceptual model would be more easily established because moving your hand fast is very similar to the action of hitting a drum. By using drum samples in the interaction, I hoped to encourage users to quickly make this association.

I worried that the difficulty of breaking the velocity threshold exactly on time in a musical rhythm might frustrate users and obstruct a joyful experience, so I implemented a ‘quantised mode’ where the samples would be forced onto a metronome grid, so that they might sound more musically ‘in time’ with each other.
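A sketch of these two mechanics together is below; the threshold and tempo values are illustrative, and a real patch would schedule quantised triggers rather than block.

    import time

    THRESHOLD = 600.0             # assumed trigger speed, mm/s
    BEAT = 60.0 / 120 / 2         # eighth-note grid at 120 bpm

    class SamplerTrigger:
        def __init__(self, quantised=False):
            self.quantised = quantised
            self.armed = True     # re-arm only after the hand slows down again

        def on_speed(self, speed_mm_s, play_sample):
            if speed_mm_s > THRESHOLD and self.armed:
                self.armed = False
                if self.quantised:
                    # defer the sample to the next metronome tick
                    time.sleep(BEAT - (time.time() % BEAT))
                play_sample()
            elif speed_mm_s < THRESHOLD * 0.5:
                self.armed = True  # hysteresis avoids double triggers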

(see Appendix E: Item 4 for more technical details of this experiment)

Results/Evaluation

There was definite joy for me within this interaction. I think the consistency created by having only one mechanic that is used throughout the interaction (moving the hand fast to trigger a sample) makes it a more relaxed, playful, and joyful experience.

I also think that a large part of the joy comes from the surprise of hearing the different samples as you move your hand around. Even though it is quick to establish that you are playing with drums, the fact that no visual interface indicates which sound is attached to which direction means that you still occasionally get surprised at how big or small the ‘drum’ you just ‘hit’ is. This lack of detailed information also perhaps makes the experience feel more playful and joyful because it seems to encourage more random exploration rather than calculated performance.

The quantise mode seems to have generally made the experience less joyful. When I presented this interaction to a few friends, they all preferred the non-quantised mode because they wanted the samples to feel as responsive as possible. I got some joy out of hearing how organised the sound output was in quantise mode, even when I was waving my hands around quite randomly. However, this mode was less joyful for me too, because whenever I played intending to trigger only a few samples it just felt less responsive.


Experiment 1.3 - Piano Clouds

Experimental Method

This experiment involves an interaction I created that I call ‘Piano Clouds’. It can be seen/heard here:

Appendix A - Video 12.1: Piano Clouds Demo

(you can also have a look at the max patch - or even try it out if you have a LeapMotion camera! - in Appendix C: Patch 11)

In this interaction I tried to maintain intuitiveness by keeping the controls very simple. These limited controls were consciously chosen as I felt like they would give the greatest sense of expressive control with the least complex input.

I hoped that the simplicity of the controls would make it intuitive and fun to use, and that the expressive control of moving between related chords would allow musical goals to be set and achieved by the user.
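Because the exact mappings live in the Max patch, the following is only a heavily hedged reconstruction of how I understand the core behaviour: one control picks a chord, another sets how densely random notes from that chord are triggered. The chord set and probabilities are assumptions for illustration.

    import random

    CHORDS = {                     # a few related chords, assumed for illustration
        "C": [60, 64, 67],
        "Am": [57, 60, 64],
        "F": [53, 57, 60],
        "G": [55, 59, 62],
    }

    def cloud_step(chord_name: str, density: float) -> list:
        """density in [0, 1]: probability that each chord tone sounds this tick."""
        return [note for note in CHORDS[chord_name] if random.random() < density]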

(see Appendix E: Item 5 for more technical details of this experiment)

Results/Evaluation

I enjoyed this interaction - for me, the simplicity of the controls allows for more joyful use because there isn’t any stress involved in trying to operate complex controls.

I did feel that it was slightly limited by the simple controls, but aesthetically that limitation suited the sound: piano notes feel like a suitably simple and ubiquitous sound output, and the way that they cluster together, forming a kind of musical ambience, suggested a slow, experimental interaction rather than one to be mastered through some complex skill or technique.

I would be interested to develop this patch with more capability to change the length of the piano samples triggered, to allow for slightly more variation in sound.

Play Testing Documentation14

This was an experiment that I managed to user test with two participants. Here are some quotes they gave during our discussions that I found particularly insightful:

“it felt nice that I didn’t have to think as much with it, I can kind of just lose myself in the tool rather than think about what I’m doing with the tool” - Lyla

“it felt creative in that I could change how it made me feel, with the chords and the speeds. But other than that it felt meditative because it was so simple” - Lyla

“if you move too fast then it sounds like you’re hitting the same note like three times and that impact, I just found it a little bit annoying because - one, it’s not natural sounding to what an actual piano would sound like, two, the same note being hit multiple times when it’s not in character with the rest of what’s going on… I feel like it’s meant to be more of a peaceful experience… but those moments would catch me off guard” - Lyla

“I’d say that was very relaxing… it’s kind of meditative in a way” - Andrew

“even though it’s very fun to play with, I don’t think of it as playful. It’s more of like an experience, of being able to waft your hands through the notes of the piano randomly” - Andrew

“I think [not having complete control over how many notes were played] was very fun” - Andrew

(more extensive lists of written quotes taken from these discussions can be found in Appendix F: Item 1)

User play testing videos of this interaction can be seen/heard here:

Appendix A - Video 12.3: Piano Clouds Play Test - Andrew

Appendix A - Video 12.4: Piano Clouds Play Test - Lyla

The full audio interviews with Andrew and Lyla just after they had each experienced this interaction can be heard here:

Appendix B - Audio 1.2: Andrew Play Test Interview - Piano Clouds

Appendix B - Audio 2.2: Lyla Play Test Interview - Piano Clouds






Chapter 2: Joy Through Complexity and Challenge

Theory - Complexity through Interface and Conceptualisation

My second approach to facilitating joy through physical sound interactions came as a response to my first. Once I had the idea of joy through ease of use in my head, I started to notice more and more contrasting discussion in the areas of design and play about the importance of challenge.

The idea of challenge and complexity leading to joy and play builds on the conceptualisation of an interaction as a series of attempts at a goal, as discussed in Chapter 1. This approach is not concerned with making the achievement of set goals as painless as possible; instead, it ensures that when one goal is achieved, it is possible for the user to set themselves a new, more challenging goal. Ideally, this increase in challenge could be repeated indefinitely. Thus, the interaction remains engaging through the increasing complexity and/or concentration required of the user, even as they become familiar with the experience.

This approach is maybe most clearly explored through social scientist Mihaly Csikszentmihalyi’s concept of the ‘flow’ state. As described by Norman:

“One important emotional state is the one that accompanies complete immersion into an activity, a state that the social scientist Mihaly Csikszentmihalyi has labelled ‘flow’… When in the flow state, people lose track of time and the outside environment. They are at one with the task they are performing. The task, moreover, is at just the proper level of difficulty: difficult enough to provide a challenge and require continued attention, but not so difficult that it invokes frustration and anxiety.”15

The above description of the flow state speaks strongly to my personal ideas of a joyful interaction. Similar ideas are described by designer Cas Holman, who designs objects to facilitate play. Holman’s company, Heroes Will Rise, follows the motto:

“‘Easy is boring.’ ‘Easy’ meaning something that doesn’t engage your thinking.” 16

The idea that a complex interface can be joyful has been specifically noted within the field of digital music control by Hunt & Kirk in their 2000 study on parameter mapping for musical performance. In their analysis of taped user interviews they conclude:

“Multiparametric interface is fun.” 17

The interface in question was one where more complex mappings meant that the sound controls were less obvious, and the volume control required constant energy from the user. In the interviews, participants commented:

[referring to the multiparametric interface] “‘This is really good fun! Even when you’re not doing so well!’ (Tony, Session 2).” 18

“‘One movement controlling several things is more fun. It’s not like a task - it’s like playing an instrument’ (Gregor, Session 1).” 19

[referring to the multiparametric interface] “‘You’re not so worried about getting it right. You can just say ah well if it’s a bit off and then adjust it. It’s less technical’ (Mark, Session 2).” 20

These comments support the idea that less simple or precise control over the parameters seems to shift the experience from one that is technical and task-based to one that is more playful and fun.

Holman interestingly expands this idea of complexity beyond just the interface to the ways that an object can be used. She describes this as the difference between a non-open-ended ‘toy’ and an open-ended ‘toy’. In this dichotomy, the non-open-ended object could be seen as simple because it can be used in fewer ways (e.g. a toy car will most likely be used in ways deemed ‘appropriate’ for a car), whilst the open-ended object takes on a complexity in the multitude of ways it could be interpreted (e.g. clay, which when played with can be adapted into a whole range of desired objects). Tovah Klein, Director of the Barnard Centre for Toddler Development, talks about this open-endedness in a recent documentary on Holman’s work:

“Lots of toys are very goal oriented. They look like something, there’s one way to play with it. So, you quickly find out it can go this way or that way and then you’re done. So it sort of shuts down that building on their own, pleasure and engagement and enjoyment.” 21

From these ideas around the merits of complexity and challenge I derived the following design principles to investigate in my own interactions:

  • avoid simple parameter controls in favour of complex, less obvious mappings
  • design interactions whose functions are more open to interpretation: an open-ended experience

These next three experiments demonstrate a few attempts at using this complexity and challenge framework to investigate joy in physical sound interactions.


Experiment 2.1 - ‘FX Control via Machine Learning Mappings’

Experimental Method

For this experiment I made a patch that I refer to as ‘Wekinator FX Controller’. It can be seen/heard in a couple of examples here:

Appendix A - Video 3.1: Wekinator with Voice Sample

Appendix A - Video 3.2: Wekinator with Wooden Recorder Sample

(you can also have a look at the max patch in Appendix C: Patch 322 and the Wekinator project file at Appendix D: Item 6)

With this interaction I tried to leverage the possibilities of machine learning with Rebecca Fiebrink’s Wekinator application23 in order to create more complex parameter mappings for the interaction.

I wanted to focus on the mappings rather than designing the effects themselves, so I decided to use MIDI to control an existing plugin. I chose the Valhalla Frequency Echo plugin24 as it has multiple parameters which can each affect the sound output in relatively extreme ways, relatively independently of each other. I also thought these specific controls (delay length, shift amount, and feedback amount) would be interesting as they have similar effects on the pitch of the output sound, but through different techniques - I hoped this would further confuse the user’s conceptual model of what they were actually controlling in this interaction.

I used Wekinator to create a complex non-linear mapping between the X, Y, and Z position of the left hand, and the three chosen parameters of the plugin. I used looped audio samples of my voice and a wooden recorder as the base for this interaction.

I hoped that the complex non-linear mappings of this interaction, coupled with the complex extreme variation in the audio output would result in a fun multiparametric interface.

(see Appendix E: Item 6 for more technical details of this experiment)

Results/Evaluation

I found this interaction joyful in how silly and ridiculous it felt. The parameter mapping was so mysterious even to me as the designer that it really encouraged a curious exploration of the interaction that almost always resulted in a surprising output. This surprise was really joyful.25

I think it’s also worth noting that on reviewing the video documentation of these interactions, this is the one that definitely resulted in the most visible enjoyment in my physicality as I played with it.

What stopped me from developing this interaction further was that it felt slightly devoid of aesthetic meaning, which left me underwhelmed. The output was surprising and playful, but the experience as a whole lacked a sense of meaning. It felt like the be-all and end-all of the interaction was to hear a sound warped in a chaotic way, rather than having an overall aesthetic sensibility such as playing music, creating sci-fi sounds, or trying to make sad/happy/angry sounds.

I would be interested in developing this interaction further in the future as something that could work with real-time audio so that you could for example use it to warp your voice or a musical performance from another performer in real-time. I think this performative capacity could be quite interesting and fun.


Experiment 2.2 - ‘Formant Filters’

Experimental Method

This experiment exists as two prototypes. The first prototype can be seen/heard here:

Appendix A - Video 5.1: Formant Filters Prototype #01

The second prototype can be seen/heard here:

Appendix A - Video 5.2: Formant Filters Prototype #02

(you can also have a look at the max patch - or even try it out if you have a LeapMotion camera! - in Appendix C: Patch 9)

In this interaction I used formant filters to create human speech-like synthesised sounds. I thought that speech-like sound might encourage user engagement through the perceived potential for expression (as human speech is inherently very expressive). I also hoped that because speech is something we are very used to forming subconsciously with our mouths, the act of controlling it consciously with our hands would be a fun challenge that subverts a familiar sonic action.

During development I spent a lot of time trying and failing to get the interaction to be capable of forming phonemes varied enough to make words. Over time, however, I came to see this limitation (the system never quite managing speech the way we normally produce it) as a potentially fun element of challenge, where the goal of comprehensible speech lies forever just out of reach.

The first prototype was an attempt to make the general interaction as simple as possible, focusing on the complexity of the sonic output and the vowel system as a control method instead.

The second prototype was an experiment in designing a complicated control interface that used a somewhat unusual combination of fast movement triggers, hand position, and hand shape. I thought perhaps using a range of different controls would make for a satisfying experience.

I was also trying to use the envelope system to add a sense of changing inflection to further create the sense of possible meaningful expression lying just out of reach of the user.
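For readers unfamiliar with the technique: formant filtering passes a harmonically rich source through band-pass filters centred on a vowel's characteristic formant frequencies. The sketch below uses textbook two-formant approximations; my patch uses its own formant table and more formants, so treat this only as an illustration of the principle.

    import numpy as np
    from scipy.signal import butter, lfilter

    sr = 44100
    FORMANTS = {"a": (800, 1200), "i": (270, 2300), "u": (300, 870)}  # Hz, approx.

    def vowel(name: str, f0: float = 110.0, dur: float = 1.0) -> np.ndarray:
        t = np.arange(int(sr * dur)) / sr
        source = np.sign(np.sin(2 * np.pi * f0 * t))       # buzzy square wave
        out = np.zeros_like(source)
        for fc in FORMANTS[name]:
            # second-order band-pass around each formant, run in parallel
            b, a = butter(2, [fc * 0.8 / (sr / 2), fc * 1.2 / (sr / 2)], "bandpass")
            out += lfilter(b, a, source)
        return out / np.max(np.abs(out))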

(see Appendix E: Item 7 for more technical details of this experiment)

Results/Evaluation

I shared the first prototype of this interaction during the Future Flavours of Sound Festival 2022, and got some positive feedback from the panellists who really enjoyed the vowel sounds as a playful output:

“…the vowel thing, that was fucking awesome… that was something I had not expected at all.” - Yann Seznec26

This definitely suggests that there is joyful potential with this interaction. Specifically I think the surprise expressed here speaks to the joy in controlling something like audible speech with your hands instead of your mouth. This subversion of expectation seems really enjoyable for people.

The second prototype, however, feels less joyful. In the play testing feedback below, users said that they “like and dislike how difficult it is.” This suggests potential for joy in the challenge, but judging by the fact that, of the five interactions these participants tested, this was among the ones both users spent the least time with, it seems that in its current state the complexity of this interaction makes it more frustrating than fun.

I also personally feel that there is slightly too much chaos in the latest prototype of this interaction, and that maybe something was lost when actions such as the velocity-based trigger were added in. This trigger system, whilst fun in some scenarios because of the energy required, feels so aesthetically dissonant from how you would expect to be making the sounds that you’re hearing that I think it makes the experience much more confused and less fun.

Play Testing Documentation27

This was an experiment that I managed to user test with two participants. Here are some quotes they gave during our discussions that I found particularly insightful:

“Cacophonous” - Andrew

“It takes a lot of effort” - Andrew

“I really like and dislike how difficult it is to control the vowels” - Andrew

“like [your hands] are going against each other rather than working together” - Andrew

“I found this a little bit more confusing compared to the other ones because it took a lot more getting used to the controls to be able to make specific sounds.” - Lyla

“it feels more complicated compared to the other ones… not because of what’s there but because of what I’m having to do with my hands” - Lyla

“the association I had with [the sound] felt more personal because it’s closer to a human voice, so in that sense the tool felt easier for me to understand in terms of like conceptually what it is but in terms of like how to control it, it took more effort.” - Lyla

(more extensive lists of written quotes taken from these discussions can be found in Appendix F: Item 1)

User play testing videos of this interaction can be seen/heard here:

Appendix A - Video 5.3: Formant Filter Play Test - Andrew

Appendix A - Video 5.4: Formant Filter Play Test - Lyla

The full audio interviews with Andrew and Lyla just after they had each experienced this interaction can be heard here:

Appendix B - Audio 1.3: Andrew Play Test Interview - Formant Filters

Appendix B - Audio 2.3: Lyla Play Test Interview - Formant Filters


Experiment 2.3 - ‘Concatenative Corpus Explorer’

Experimental Method

For this experiment I developed a patch I refer to as ‘Concatenative Corpus Explorer’ which can be seen/heard here:

Appendix A - Video 6.3: Concatenative Corpus Explorer Prototype #02

(you can also have a look at the max patch - or even try it out if you have a LeapMotion camera! - in Appendix C: Patch 5)

This patch was largely adapted from James Bradbury’s 2D Corpus Explorer28, made with the FluCoMa toolkit.

For this experiment I wanted to explore more ways to create a complex non-linear sonic output. I was particularly drawn to the concatenative possibilities of the FluCoMa toolkit because changing the target corpus offers a quick way to completely change the sound of the interaction. This variety felt like it spoke to the open-endedness I was aiming for in these experiments. In the current patch I include three corpora with contrasting sonic textures to try to achieve this open-endedness.

I hoped that this interaction would be joyful in how complex the output sound is. Even though the controls are relatively simple in terms of physical action, the movement through the corpus results in such unpredictable changes in pitch, loudness, and timbre that it is still difficult to control the sound output in an intentional way. I hoped that the complex sonic output would give the illusion of expressive control even when used naively, and would therefore encourage users to keep using the interaction to try to gain even more control over the chaotic sound.
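The core lookup in a 2D corpus explorer can be sketched as a nearest-neighbour query: each audio slice has been reduced to a point in a 2D space (FluCoMa derives these from audio descriptors plus dimensionality reduction), and the hand position selects the nearest slice for playback. The corpus points below are random stand-ins for real analysis data.

    import numpy as np
    from scipy.spatial import cKDTree

    corpus_points = np.random.rand(500, 2)      # stand-in 2D positions of 500 slices
    tree = cKDTree(corpus_points)

    def slice_for_hand(hand_x: float, hand_y: float) -> int:
        """Map a normalised hand position to the index of the nearest slice."""
        _, index = tree.query([hand_x, hand_y])
        return int(index)   # a real patch would now play buffer segment `index`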

(see Appendix E: Item 8 for more technical details of this experiment)

Results/Evaluation

I personally think this is one of my two favourites of all of the interactions. I think for me the joy comes from the sensation of controlling such a complex sound. Even though I don’t need to make complex intricate motions with my hands to create the sound, I found that the complexity of the sound made me want to move in a way that sort of mimicked the sound movement which then made the whole experience feel very satisfying.

I particularly felt that the delay line control made a big difference, as the slight pitch shift it caused when fluttering the left hand’s fingers gave the illusion of more control than you actually have, because the sound responds to movements as intricate as finger movements.

I also personally really enjoyed the volume being controlled by the fist opening and closing, as this made it feel more possible to create varying phrases with the sound. The detail of the fist opening meant that volume could still be controlled quite carefully, but also could be sustained or stopped at any time very easily. I think this was especially possible because the right hand had only the two simple controls, and so didn’t need to do any other complex movements that would interfere with opening and closing the fist.

This interaction also resulted in some really positive responses from users who tested it, as is recorded below. Users in particular seemed to really like the challenge of trying to control the complex sound, and the responsive simplicity of the low-pass filter.

Play Testing Documentation29

This was an experiment that I managed to user test with two participants. Here are some quotes they gave during our discussions that I found particularly insightful:

“I think I just made the coolest piece… this one might actually be my favourite so far out of all of them…” - Andrew

“what’s nice about it is how it’s limiting, you can’t just [go to the point you want]” - Andrew

“it’s slightly annoying that you can’t just pick exactly where it is [the position that determines the playback slice of audio] but also that’s like kind of one of my favourite parts about it” - Andrew

“it feels less like an audio tool… I would say this one feels more playful” - Andrew

“I’m a very big fan… I really enjoyed it” - Lyla

“I love the filter… it was very consistent throughout all of the settings and… the freedom that I had and the sensitivity, it works well with the sensor - I felt like it was easier to control in this one” - Lyla

“it felt more like I was playing with it this time compared to the other ones” - Lyla

(more extensive lists of written quotes taken from these discussions can be found in Appendix F: Item 1)

User play testing videos of this interaction can be seen/heard here:

Appendix A - Video 6.4: Concatenative Play Test - Andrew

Appendix A - Video 6.5: Concatenative Play Test - Lyla

The full audio interviews with Andrew and Lyla just after they had each experienced this interaction can be heard here:

Appendix B - Audio 1.4: Andrew Play Test Interview - Concatenative

Appendix B - Audio 2.4: Lyla Play Test Interview - Concatenative






Chapter 3: Joy Through the Embodied

Theory - Embodied Sonic Interactions and Instruments

My third approach to joy within physical sound interactions focuses on the concept of embodied interactions. Atau Tanaka uses the term “Embodied Sonic Interaction”30 in a talk by the same name at The New School in 2013, in which he brings attention to the definition of embody as a verb:

“To give tangible, bodily, or concrete form to (an abstract concept)” or “To collect or unite in a comprehensive whole, system, etc.” 31

Tanaka notes that this ‘abstract concept’ might be an idea, quality, or feeling. The sense of uniting into a single whole speaks to me of the ideal that the act of design aims for.

These ideas of embodied interactions build upon research by Fishkin et al. on what they refer to as “Embodied User Interfaces,” which require that “…the manipulation [input] and the virtual representation [output] are integrated within the same object,”32 (the object here being the interface). Fishkin et al. describe these embodied user interfaces as being “…on an evolutionary path towards an ideal of the invisible user interface.”33

The closeness described by this concept of an embodied interface or interaction suggests joy to me. It makes me think of interactions in everyday life where the object or interface used feels so connected to the task it fulfils that we hardly notice it (for example a kettle or a kitchen tap).

Ge Wang addresses this concept of embodiment in his ‘Artful Design’:

“We humans are embodied creatures; we operate more efficiently, satisfyingly when we ‘feel as one’ with the interface we are using! Similar to using our hands, an embodied interface allows us to think less about how to control it and more about what we’d want to do with it.” 34

“…the most effective and elegant interactions are the result of interfaces that mediate and seamlessly bind the user and artefact into a single system.” 35

I find Wang’s description of the merits of this embodiment really inspiring. The interactions he describes feel as if they offer a freedom of expression and control that is truly ideal: more organic and biological than the logical interactions we might expect to have with a computer. These kinds of interactions, where the technology aims to “extend us – our bodies and even our intentions…”36, feel truly joyful to me. Because these embodied interactions focus on extending us as users, the impressive outputs possible from the interaction become more personally internalised. Instead of thinking ‘Wow! How awesome that this synth can make that sound!’, we might think ‘Wow! How awesome that I can make that sound!’. To me, this distinction is hugely significant and brings with it a lot of joy and fun.

This joy through the personal was also brought to my attention during a discussion I had with Sandra Pauletto during the ‘Sound Design(ed) Futures’ Conference, hosted by the Université Gustave Eiffel.37 I asked the speakers about their thoughts on ‘fun’ within sonic interactions in relation to their own work with physical sound interactions, and Pauletto spoke specifically about how personal experience was something she actively thought about when designing enjoyable interactions for users. Designing for personal experience, Pauletto posited, allows a wide range of users, who will likely have varying tastes and preferences, to bring their own desires into an interaction.38

From these concepts of embodied interactions, I derived the following design principles to investigate in my own interactions:

  • design for an aesthetic coherence between the input and output
  • make the interface feel invisible
  • conceptualise the interaction as an extension of the body/hands rather than a separate entity

These next three experiments demonstrate a few attempts at using this embodied framework to investigate joy in physical sound interactions.


Experiment 3.1 - ‘FM Velocity Synth’

Experimental Method

For this experiment, I designed an interaction that I refer to as ‘FM Velocity Synth’. This interaction can be seen/heard here:

Appendix A - Video 4.7: FM Synth Prototype #03

(you can also have a look at the max patch - or even try it out if you have a LeapMotion camera! - in Appendix C: Patch 2)

In this interaction I tried to focus on making the sound output as descriptive of the input action as possible. I imagined what I would want it to sound like if I were creating a sound design to describe the action of moving my hand. By focusing in this way I hoped to make the interface as invisible as possible, so that moving the hands didn’t feel like an act of changing parameters; instead, as the hand moves, the parameters change such that the sound feels like it appropriately accompanies/underscores the action.

One of the main ways that I tried to achieve this was by mapping the volume of the synth to the velocity of the hand. By doing this, I aimed to connect the energy of the input as closely as possible to the energy of the output. In my mind this matching would give not only aesthetic coherence to the interaction (both input and output behave similarly) but would also perhaps suggest a more invisible interface through the apparent conservation of energy. By conservation of energy I refer to the physical law that the energy put into a system must always equal the energy expelled out of it. This universal truth is usually pretty effectively governed by the laws of physics; however, in the world of digital musical interfaces it is easy to bypass this rule, as energy provided by the technology can be used to create output even if there is ‘no’ input present (for example in my experiment 2.1, or prototype 1 of experiment 2.2, where volume is constant throughout the interaction even if the hand is still or not present). Therefore, I hoped that by making this interaction (apparently) ‘obey’ this law of physics, it would be perceived more as a ‘real’ interaction rather than a digitally mediated interaction only made possible by intervening technology. (A sketch of this velocity-to-volume mapping follows the list below.)

I also aimed to create the sense of the interaction extending the body by trying to dictate my design through physical logic such as:

  • each hand is distinct, so it should control a distinct synth voice
  • both hands are physically similar so they should have a similar ‘hand’ sound but with small variations to distinguish one from the other
  • an open hand feels different to a closed fist, so the closed fist should have a different sound: a ‘muffled’ one, as if the sound coming from the hand is being blocked by it being closed
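As a rough illustration of the velocity-to-volume mapping and the ‘muffled fist’ principle, here is a sketch with assumed scaling constants; the real mappings live in the Max patch. It uses the degree of fist closure, which LeapMotion exposes as a grab strength between 0 and 1.

    def amplitude_from_speed(speed_mm_s: float) -> float:
        # A still hand is silent: no input energy, no output energy.
        return min(speed_mm_s / 1000.0, 1.0)

    def cutoff_from_grab(grab: float) -> float:
        """grab in [0, 1]: 0 = open hand, 1 = closed fist."""
        open_hz, closed_hz = 8000.0, 300.0
        return open_hz + grab * (closed_hz - open_hz)   # closing 'muffles' the tone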

(see Appendix E: Item 9 for more technical details of this experiment)

Results/Evaluation

I shared a video of this interaction during the Future Flavours of Sound Festival 2022, and got some positive feedback from the panellist Yann Seznec who really enjoyed the control of the volume and filtering:

“oh man, it’s really nice that it’s like, I guess, velocity based? …it’s amazing how simple that feels, and yet it’s very effective because it means you can hold your hands still and there’s no sound and that’s a very strong sonic interaction kind of thing was actually managing silence. Because managing silence in anything that involves movement is very hard - especially anything that involves direct mapping of movement… so that I thought was really effective.” - Yann Seznec39

“Leaning into [fist opening and closing] was a good choice because from a sound design perspective and a sound interaction perspective, closing your hand for me definitely has that kind of filtering feel… it works - that connection is very strong… you don’t even notice it happening because it’s such a strong interaction.” - Yann Seznec40

Personally I was really happy with this interaction from a technical standpoint, but I wouldn’t describe it as joyful for myself. It feels satisfying in how closely it describes my actions, but I think it is also limited by this close description. In my mind, the interaction translates the hands’ actions in a satisfying way, but doesn’t extend them enough to be fun in a way that would make me want to spend a long time with it.

I do feel that the volume control with velocity is very satisfying, and for me really successfully gives the impression of an ‘invisible interface’. I would be interested in taking this aspect of the interaction and experimenting with it further on different sounds.


Experiment 3.2 - ‘Scrubber Explorer’

Experimental Method

For this experiment, I designed an interaction that I refer to as ‘Scrubber Explorer’. This interaction can be seen/heard here:

Appendix A - Video 10.2: Scrubber Explorer Prototype #02

(you can also have a look at the max patch - or even try it out if you have a LeapMotion camera! - in Appendix C: Patch 7)

This experiment came about somewhat unexpectedly when I was playing with an earlier interaction that I designed to emulate record scratching, which you can see/hear here:

Appendix A - Video 9.2: Record Scratching Simulator

This interaction mimicked the action of stopping a record on a turntable and scratching it back and forth. However, when I was playing with it I noticed that sometimes when moving my hand very slowly the low-pitched sound that resulted (from the audio playing back at an extremely slow playback rate) felt very descriptive of the movement of my hand.

I therefore decided to create this explorer interaction where, similar to experiment 3.1, the sound is driven by the velocity of your hand, except here it controls the sound via playback rate rather than volume. This is an interesting relationship, as there is something quite ‘natural’ in the connection of hand speed to playback rate - the playback rate is effectively the speed of the sound. I hoped that this connection would make the interaction feel like a more embodied experience.
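A sketch of this speed-to-playback-rate idea follows, with assumed constants and simple nearest-sample lookup (the real patch is more refined):

    import numpy as np

    def scrub(buffer: np.ndarray, centre: int, speed_mm_s: float, n: int) -> np.ndarray:
        rate = min(speed_mm_s / 500.0, 2.0)     # 0 = frozen, 1 = normal speed
        # read n samples outward from `centre` at the hand-driven rate
        positions = centre + np.cumsum(np.full(n, rate))
        positions = np.clip(positions, 0, len(buffer) - 1)
        return buffer[positions.astype(int)]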

(see Appendix E: Item 10 for more technical details of this experiment)

Results/Evaluation

As a designer, this interaction is probably my favourite of all the ones I have made for this project. The (relative) technical simplicity of it feels incredibly satisfying, knowing that the huge range of sonic character I am getting from the experience is largely down to a single playback-rate control system. I also think aesthetically there is something very pleasing about being able to ‘explore’ a single moment in an audio buffer so physically with your hands, and in doing so hear a familiar sound in a new, unfamiliar way.

I do feel that the connection of my physical energy to the playback rate makes this interaction feel like an extension of my body in a very satisfying way that also makes me less aware of the interface. I do think it’s a shame that this interaction has so many interface controls that are set and changed in Max MSP with the mouse, which obviously breaks this sense of there being no interface - but for me the variety that this offers is worth the possible loss of immersion.

I also really enjoy how physical the sound output feels in this interaction. Depending on the sound used, it really sounds like your hands are moving through different mediums like paper or mud. I think that the apparent physicality of this interaction makes it feel like you have a lot more control over the sound than you technically do, and this makes for a really enjoyable experience.

The users who play tested this interaction enjoyed it as well, as I have recorded below. I found it particularly interesting how Lyla commented that she felt she “needed to create the sound myself,” and that Andrew felt with some sounds that he was controlling “every little particle that happens within the sound.” I think these comments support the idea that the embodied control I tried to achieve with this design gives a joyful sense of control.

Play Testing Documentation41

This was an experiment that I managed to user test with two participants. Here are some quotes they gave during our discussions that I found particularly insightful:

“I feel like you’re allowed to be really creative while you’re playing with [the interaction] because there are so many options.” - Andrew

“I really liked being able to find a specific part of the audio file that you could then control… then that being randomised I feel like keeps things really interesting and really fresh each time you start moving around because its like really playful.” - Andrew

“It definitely reminds me of like playing with records and like trying to figure how to do live scratching with records which is always super fun” - Andrew

“The singing bowl one on its own I feel like is not as fun to mess with because it’s such a constant sound… but the plastic bag and the paper tearing because they’re so like textural… it makes you feel like you’re really controlling every little particle that happens within the sound which is really fun” - Andrew

“it’s kind of like an investigation” - Lyla

“it was really cool to sometimes combine [the sounds]” - Lyla

“consistency was one thing I wasn’t like very comfortable with… but that’s what I kind of like about it too, it kind of breaks your expectations” - Lyla

“when I was doing it I didn’t feel like I was just pressing buttons to get a sound… I needed to create the sound myself so I needed to figure it out and get familiar with it” - Lyla

(more extensive lists of written quotes taken from these discussions can be found in Appendix F: Item 1)

User play testing videos of this interaction can be seen/heard here:

Appendix A - Video 10.3: Scrubber Explorer Play Test - Andrew

Appendix A - Video 10.4: Scrubber Explorer Play Test - Lyla

The full audio interviews with Andrew and Lyla just after they had each experienced this interaction can be heard here:

Appendix B - Audio 1.1: Andrew Play Test Interview - Scrubber Explorer

Appendix B - Audio 2.1: Lyla Play Test Interview - Scrubber Explorer


Experiment 3.3 - ‘Magic Spells’

Experimental Method

For this experiment I designed an interaction that I refer to as ‘Magic Spells’ which can be seen/heard here:

Appendix A - Video 11.3: Magic Spells Prototype #03

(you can also have a look at the max patch - or even try it out if you have a LeapMotion camera! - in Appendix C: Patch 10)

With this interaction I was aiming to combine synthesis techniques from previous experiments to create an interaction that strictly represented a single fantastical action: casting magic spells. This felt like a fun interaction to design for, as spell casting is a commonly understood concept with recognisable associated gestures, whilst still being something completely fantastical that cannot actually be achieved without some kind of extension of our own bodies.

I conceptualised the physical action of casting a spell as first holding your hand closed in a fist to start charging up magical energy, and then opening your hand fully to release the energy as a magical spell that shoots out from your hand. This felt like it had a fairly appropriate basis in depictions of magic in pop culture, whilst also feeling clearly defined enough to be recognised by the LeapMotion.

By focusing on a single action (casting a spell), I hoped that the user would conceptualise this interaction more as an embodied experience, rather than as an abstract tool to control sound (which suggests more of a separation between the user and the sonic output).
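This charge-and-release logic can be sketched as a small state machine driven by the same grab value described in experiment 3.1; the thresholds and charge rate below are assumptions.

    import time

    class SpellCaster:
        def __init__(self):
            self.charge = 0.0
            self.last_time = time.time()

        def update(self, grab: float):
            """grab in [0, 1]: 1 = closed fist, 0 = open hand."""
            now = time.time()
            dt, self.last_time = now - self.last_time, now
            if grab > 0.8:                      # fist closed: charge up
                self.charge = min(self.charge + dt, 3.0)
            elif grab < 0.2 and self.charge > 0:
                print(f"Cast! intensity={self.charge:.2f}")   # trigger release sound
                self.charge = 0.0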

(see Appendix E: Item 11 for more technical details of this experiment)

Results/Evaluation

I think this is a really joyful interaction. I also think that the joy I get from this interaction is really distinctively playful. Whilst some of my other favourite interactions are joyful because they feel like such a complex way of controlling sound, this one is one of my favourites because it feels so fun to pretend to be able to cast spells - and I think the interaction does a pretty good job of creating this sonic illusion.

The camera sometimes losing track of the hands does make the limitation of the interaction space quite apparent, but at worst I think this makes the interaction feel like a game that is really fun when you play successfully and mildly infuriating when you fail - a hallmark of many games that I truly enjoy.

Even in this relatively basic state I think this interaction shows real promise for creating a really joyful embodied interaction that could speak to a fairly universal concept of playful fun, where we can pretend to have magical superpowers.

The play testers also seemed to enjoy this interaction - with Andrew going so far as to call it “the most fun out of all of them”.

I definitely plan to develop this interaction further, potentially finding ways to further customise the spell being cast by a hand, or experimenting with different sounds to see how they change the embodied sensation of the interaction.

Play Testing Documentation42

This was an experiment that I managed to user test with two participants. Here are some quotes they gave during our discussions that I found particularly insightful:

“this one’s definitely I think the most fun out of all of them… I don’t know if it’s my favourite one but I think it’s the most playful one for sure” - Andrew

“I feel like the controls in this one are simpler than the others… it has very few things that you can tell that you’re controlling at least, regardless of how many things are going on under the surface” - Andrew

“it really feels like you have this power in your hands now through this one” - Andrew

“it’s playing around with like the anticipation of a sound or like the result that you expect” - Lyla

“I was slightly annoyed by how the sensor did not always pick up what I was doing with my hands… I also felt like I wanted to have the freedom to orchestrate the charging speed of the respective batteries, which was not available. However, it was cool to notice the change in my emotional response to these issues over time, as I was able to positively accredit the subtle frustration for making the interaction feel more game-like, eventually sparking a very different sense of joy.” - Lyla

(more extensive lists of written quotes taken from these discussions can be found in Appendix F: Item 1)

User play testing videos of this interaction can be seen/heard here:

Appendix A - Video 11.4: Magic Spells Play Test - Andrew

Appendix A - Video 11.5: Magic Spells Play Test - Lyla

The full audio interviews with Andrew and Lyla just after they had each experienced this interaction can be heard here:

Appendix B - Audio 1.5: Andrew Play Test Interview - Magic Spells

Appendix B - Audio 2.5: Lyla Play Test Interview - Magic Spells

(N.B. this audio interview with Lyla [Audio 2.5] was unfortunately cut off by a recording device malfunction - Lyla kindly wrote up some of her thoughts on this interaction, which you can find in full in Appendix F, Section 1)






Conclusion

To conclude what I’ve learnt from this project and the experiments within it: I have gained insight into what makes physical sound interactions joyful to me as a designer and user. I summarise these insights as:

  • Responsive systems are joyful

  • Direct mapping of energy in the input to energy in the output is joyful

  • The ability to construct/sculpt phrases is joyful

  • Aesthetic connection between input and output is joyful

There were many other insights into physical sound interactions that I gained, as documented in the experimental evaluations in this write up, but for me these points above stood out as pivotal ideas that persisted in my mind throughout these different experiments. These ideas helped me grasp the ways that I experience joy within these interactions.

Joy is undoubtedly a subjective experience that will mean different things for different people at different points in their lives, which is exactly why for me as a designer it is so useful to take this research opportunity to break it down a bit and analyse personally what joy means to me now within these interactions. Having made these conclusions, I look forward to taking these concepts into my future design work and seeing how my understanding and experience of them changes over time.

I think it was always an over-ambitious goal to capture the essence of joy in a way that could be easily identified and ‘bottled’ for easy access and liberal future application. However, this project has been an excellent exercise in taking an emotional and experiential end goal, exploring and experimenting with the many ways in which I might achieve it technically and aesthetically, and evaluating why they do and don’t work (for me and possibly for others).

The Future of this Project

This project has happily presented many new questions and paths along the way that have either been beyond the scope of this specific project, or that I haven’t had the time to properly pursue and integrate. In the future I hope to develop this project into a more accessible, compact piece of software that I can share online, conduct larger participant studies for feedback, and go further in depth into the possibilities of using these interactions in my own artistic practice, in either a live performance or installation context.

For further details on these conclusions, my role within the project, and the failures and future directions of this project see Appendix E Items 12, 13, 14 and 15.






Bibliography

Abstract: The Art of Design. ‘Cas Holman: Design For Play’. Publikro London, RadicalMedia, Tremolo Productions, 21 January 2017.

Ashanti, Onyx. ‘Onyx Ashanti | Speaker | TED’. Accessed 22 August 2022. https://www.ted.com/speakers/onyx_ashanti.

Bradbury, James. ‘Learn FluCoMa: 2D Corpus Explorer’. Accessed 1 August 2022. https://learn.flucoma.org/learn/2d-corpus-explorer/.

Cardiff, Janet. ‘The Missing Voice (Case Study B) | Artangel’. Accessed 22 August 2022. https://www.artangel.org.uk/project/the-missing-voice-case-study-b/.

Collins English Dictionary – Complete and Unabridged. 12th Edition., 2014. https://www.thefreedictionary.com/embody.

Fiebrink, Rebecca. ‘Wekinator | Software for Real-Time, Interactive Machine Learning’. Accessed 22 August 2022. http://www.wekinator.org/.

Fishkin et al. ‘Embodied User Interfaces: Towards Invisible User Interfaces | SpringerLink’. Accessed 8 August 2022. https://link.springer.com/chapter/10.1007/978-0-387-35349-4_1.

Future Flavours of Sound Festival 2022 - Audio Programming and Technologies, 2022. https://www.youtube.com/watch?v=iCKRifnjdDA.

Goldsmiths, University of London. ‘Prof Atau Tanaka’. Accessed 22 August 2022. https://www.gold.ac.uk/computing/people/tanaka-atau/.

Hunt, Andy, and Ross Kirk. ‘Mapping Strategies for Musical Performance’, 2000, 28.

McCausland, Douglas. ‘Douglas McCausland // Official Website’. Accessed 22 August 2022. https://www.douglas-mccausland.net.

‘Michel Waisvisz – DIGITAL ART (1960-2000)’. Accessed 22 August 2022. https://www.digitalcanon.nl/?artworks=michel-waisvisz.

‘MiMU | Home’. Accessed 22 August 2022. https://mimugloves.com/.

Norman, Donald A. The Design of Everyday Things / Donald A. Norman. Revised and Expanded edition. Cambridge, Massachusetts: The MIT Press, 2013.

‘Punchdrunk “The World’s Leading Immersive Theatre Company” – GQ Magazine’. Accessed 22 August 2022. https://www.punchdrunk.com/.

‘Sound Design(ed) Futures : New realities, spaces, technologies’. Accessed 27 May 2022. https://lisaa.univ-gustave-eiffel.fr/actualites/actualite/sound-designed-futures-new-realities-spaces-technologies.

Tanaka, Atau. Embodied Sonic Interaction: Gesture, Sound and the Everyday, 2013. https://www.youtube.com/watch?v=IyOUVixqmTU.

Valhalla DSP. ‘Valhalla Freq Echo: Freqency Shifter Plugin | Free Reverb Plugin’. Accessed 22 August 2022. https://valhalladsp.com/shop/delay/valhalla-freq-echo/.

Wang, Ge. Artful Design: Technology in Search of the Sublime / Written and Designed by Ge Wang. Stanford, CA: Stanford University Press, 2018.






Appendices

Appendix A: Video Documentation

I have hosted these Appendix Items on Vimeo for ease of embedding them throughout the project. However, these Appendix Items can also be found in the file directory of this project.

Section 1: Initial (Pre-LeapMotion) Tests

Video 1.1: Colour Tracking Ping Pong Ball with Trigger Grid

Video 1.2: Computer Mouse with Trigger Grid

Section 2: Pinch Sine Waves Demos

Video 2.1: First LeapMotion Sound Test

Video 2.2: Pinch Sine Waves Prototype #01

Video 2.3: Dial Music (Pinch Sine Waves Prototype #02)

Section 3: LeapMotion + Wekinator Experiments

Video 3.1: Wekinator with Voice Sample

Video 3.2: Wekinator with Wooden Recorder Sample

Section 4: FM Velocity Synth

Video 4.1: Velocity-Volume Mapping Attempt #01

Video 4.2: Velocity-Volume Mapping Attempt #02

Video 4.3: Velocity-Volume Mapping Attempt #03

Video 4.4: FM Synth Prototype #01

Video 4.5: Fist Controlled Filter Demo

Video 4.6: FM Synth Prototype #02

Video 4.7: FM Synth Prototype #03

Section 5: Formant Filters

Video 5.1: Formant Filters Prototype #01

Video 5.2: Formant Filters Prototype #02

Video 5.3: Formant Filter Play Test - Andrew

Video 5.4: Formant Filter Play Test - Lyla

Section 6: Concatenative Synthesis

Video 6.1: Concatenative Corpus Explorer Prototype #01

Video 6.2: Concatenative Interaction with Panning Demo

Video 6.3: Concatenative Corpus Explorer Prototype #02

Video 6.4: Concatenative Play Test - Andrew

Video 6.5: Concatenative Play Test - Lyla

Section 7: Theremin

Video 7.1: Theremin Prototype #01

Video 7.2: Theremin Changeable Orientation Demo

Video 7.3: Theremin Prototype #02

Video 7.4: Theremin Vibrato Options Demo

Section 8: Sampler

Video 8.1: Sampler Demo

Section 9: Scrubbing (Record Scratcher)

Video 9.1: Audio Playback Control Attempt #01

Video 9.2: Record Scratching Simulator

Video 9.3: Record Scratcher with Visual Feedback Demo

Section 10: Scrubbing (Explorer)

Video 10.1: Scrubber Explorer Prototype #01

Video 10.2: Scrubber Explorer Prototype #02

Video 10.3: Scrubber Explorer Play Test - Andrew

Video 10.4: Scrubber Explorer Play Test - Lyla

Section 11: Magic Spells

Video 11.1: Magic Spells Prototype #01

Video 11.2: Magic Spells Prototype #02

Video 11.3: Magic Spells Prototype #03

Video 11.4: Magic Spells Play Test - Andrew

Video 11.5: Magic Spells Play Test - Lyla

Section 12: Piano Clouds

Video 12.1: Piano Clouds Demo

Video 12.2: Piano Clouds Different Chord Changing Techniques Demo

Video 12.3: Piano Clouds Play Test - Andrew

Video 12.4: Piano Clouds Play Test - Lyla



Appendix B: Audio Documentation

I have hosted these Appendix Items on SoundCloud for ease of embedding them throughout the project. However, these Appendix Items can also be found in the file directory of this project.

Section 1: Andrew Play Testing Interviews

Audio 1.1: Andrew Play Test Interview - Scrubber Explorer

Audio 1.2: Andrew Play Test Interview - Piano Clouds

Audio 1.3: Andrew Play Test Interview - Formant Filters

Audio 1.4: Andrew Play Test Interview - Concatenative

Audio 1.5: Andrew Play Test Interview - Magic Spells

Section 2: Lyla Play Testing Interviews

Audio 2.1: Lyla Play Test Interview - Scrubber Explorer

Audio 2.2: Lyla Play Test Interview - Piano Clouds

Audio 2.3: Lyla Play Test Interview - Formant Filters

Audio 2.4: Lyla Play Test Interview - Concatenative

Audio 2.5: Lyla Play Test Interview - Magic Spells



Appendix C: Max Patches

These Appendix Items can be found in the file directory of this project.

Patch 0: LeapMotionUnity.maxpat

Patch 1: jr_LeapMotion_DialMusic.maxpat

Patch 2: jr_LeapMotion_FM_Prototype02.maxpat

Patch 3: jr_LeapMotion_Wekinator_MIDI_FX.maxpat

Patch 4: jr_LeapMotion_Scrubbing_RecordScratcher.maxpat

Patch 5: jr_Concatenative.maxpat

Patch 6: jr_LeapMotion_Theremin.maxpat

Patch 7: jr_LeapMotion_Scrubbing_Explorer.maxpat

Patch 8: jr_LeapMotion_Sampler.maxpat

Patch 8.1: sampler_voice.maxpat

Patch 9: jr_LeapMotion_FormantFilters.maxpat

Patch 9.1: formantFilter_voice.maxpat

(N.B. Patch 9 also requires check.json and envelopes.json to be in the same folder)

Patch 10: jr_LeapMotion_MagicSpells.maxpat

Patch 11: jr_PianoClouds.maxpat

Patch 11.1: pianoSampler_voice_fader.maxpat



Appendix D: Other Software

These Appendix Items can be found in the file directory of this project.

Item 1 - ‘jr_LMOU_readme.txt’

  • This is a readme documentation file to accompany the Unity application that I built to send data from the LeapMotion to OSC ports. It details how the data can be read, and gives approximate data ranges.

Item 2 - ‘LeapMotion to OSC Application’

  • This folder contains ‘LeapMotionOSC.exe’, the Windows build of the Unity application that I use to send data from the LeapMotion to OSC. I currently only have this built for Windows, as the priority for the project was getting the interactions working on my machine rather than having them accessible on multiple operating systems for other users.

Item 3 - ‘UnityOSC-master’

  • This folder is the Unity project for the application that I made to send LeapMotion data to OSC.

Item 4 - ‘LeapMotion-PD-Game_Unity’

  • This folder is the Unity project that I started to build with the intention of using pure data to integrate some of the interactions into a standalone Unity application. It is currently not building properly but I have included it for reference.

Item 5 - ‘pure data patches’

  • These are some pure data patches that I made as ‘translations’ of some of my max MSP interactions so that I could use them in a Unity application. The pure data patches were working but the issues I was having with Unity made me decide to postpone this aspect of the project until a later date. I have included them here for reference.

Item 5.1 - ‘FM_Synth.pd’

  • This is a remake of my FM Synth interaction - approximately the same as patch 2 in Appendix C.

Item 5.2 - ‘onebang.pd’

  • This is a simple abstraction I made in pd to emulate the onebang object in max MSP.

Item 5.3 - ‘onebangWithoutPreset.pd’

  • This is a simple abstraction I made in pd to emulate the onebang object in max MSP (a variant which on awake requires a reset before it will pass any bang).

Item 5.4 - ‘scale.pd’

  • This is a simple abstraction I made in pd to emulate the scale object in max MSP.

Item 5.5 - ‘scale~.pd’

  • This is a simple abstraction I made in pd to emulate the scale~ object in max MSP.

Item 5.6 - ‘Scrubbing.pd’

  • This is a remake of my Record Scratching interaction - approximately the same as patch 4 in Appendix C.

Item 5.7 - ‘spigot~.pd’

  • This is a simple abstraction I made in pd to allow use of the spigot function with a signal.

Item 5.8 - ‘theremin.pd’

  • This is a remake of my Theremin interaction - approximately the same as patch 6 in Appendix C.

Item 6 - ‘Wekinator_Project’

  • This is a folder which contains the ‘FiddleSticks_Wekinator.wekproj’ file which was used for Experiment 2.1 in Chapter 2 in conjunction with patch 3 in Appendix C.



Appendix E: Blog Posts

This Appendix contains selected blog posts from various points throughout the project.

Item 1: Sound Design(ed) Futures Conference (30/05/2022)

I attended the Sound Design(ed) Futures Conference on 27/05/2022, which was held online, hosted by the Université Gustave Eiffel and organised by Eleni-Ira Panourgia and Andrea Giomi. Within this conference I was particularly interested in the talks given by Sandra Pauletto, on her project focusing on sound design for energy, and Atau Tanaka, on his research into using AI to train physical gesture sound interactions. After their talks, I asked the speakers how the complexity of their gesture designs affected the experience of fun in the users. Whilst acknowledging that ‘fun’ is a subjective term, Tanaka and Pauletto both provided some interesting insight into this. Tanaka noted that fun seemed to be tied both to a level of intuitive design, and to a level of playfulness and ludic design. ‘Ludic’ feels like a very interesting word to explore further - defined by the Oxford Dictionary as “showing spontaneous and undirected playfulness”, a phrase worth keeping in mind whilst designing interactions. Pauletto expanded further on this challenge of designing playful interactions. She discussed how the research she is conducting involves designing for a wide range of users (as it focuses on the sound design of household items), which might include families and young children as well as adults. Pauletto also spoke of how, when designing to encourage certain behaviour, they found that positive reinforcement was more effective than negative reinforcement - or at least was certainly more enjoyable and more likely to be adopted by a wider range of users. Therefore, some of their research into how to design the sound of household appliances to encourage better use of energy involves this kind of design that attempts to make these items fun and enjoyable to use.

Pauletto mentioned specifically that the sensation of a personalised experience was connected with an enjoyable experience. A wide range of users are always going to have particular tastes and preferences that do not align with each other - and so the more that a designer accommodates for this and allows a user to bring in their own preferences to an experience, the more likely they are to enjoy it. The example they gave was the singing shower designed by PhD candidate Yann Seznec within the context of the wider research project. The shower was activated when it detected sounds that had a sustained pitch that it recognised as music (the user singing). A demo of this can be seen here:

The Singing Shower Prototype (Yann Seznec, 2022)

This meant that even though there was a single design implemented for all users, each user could sing their own preference of song, style, volume etc., leading to a personalised user experience. This was very inspiring to me, and encouraged me to begin implementing these less prescriptive physical gesture mappings. With the velocity trigger system that I outlined in the above section, there is a similar personalised experience in that the user is able to move their hand in any direction, with any kind of additional flair, to achieve the same sound output. Hopefully this results in a more fun experience, where the user can cater the physical interaction towards their specific preferences for the hand gestures they use to make the sound.

Item 2: Play Testing Questionnaire, Setup, and Evaluation (19/08/2022)

Questions for Participants - asked verbally in conversation after each interaction

  1. What are a few words you would use to describe this experience?

  2. What do you enjoy most about this interaction, if anything?

  3. What do you find the most frustrating/annoying about this interaction, if anything?

  4. How would you categorise/conceptualise this experience? For example: Playing a musical instrument / A creative tool for making sounds / Just playing / Playing a game / An educational experience etc.

  5. Any other thoughts/associations/feelings/ideas that come to you from this experience?

I tried to avoid making these questions too leading. I focused on the broad ideas of things they enjoyed and didn’t enjoy to try to coax out ideas that might be relevant to my idea of joy without putting a word like joy directly into their minds as vocabulary.

Limitations of this Study

It is worth noting the severe limitations of this play testing study. As I note in my project reflection, my time and resource limitations during the project meant that I didn’t manage to conduct as rigorous a play test study as I would have liked. However, because of the insight that other users can provide on my own evaluations I felt that it was still worth conducting this very limited study.

It is also worth noting that the two participants represent a very small proportion of potential users for these interactions, and both come with their own potential biases. In particular, the results gathered from this study may be influenced by the fact that both participants are highly trained musicians (guitar and violin), and Andrew is a sound designer who is familiar with max MSP. These factors mean that Andrew and Lyla have a very specific relationship with sound, music, and making sounds - potentially a more informed one than is usual among the general public. These aspects also make them very interesting participants, and I found their feedback incredibly insightful, but the bias is important to acknowledge.

Study Method

The study involved only two participants: Andrew and Lyla. I invited each of them into a room where the LeapMotion and laptop with the interaction was set up and then went through the following steps:

  1. I briefly explained the ‘controls’ of the interaction, and the setup whereby their play would be filmed and the interview recorded

  2. I stood back and allowed them to play with the interaction

  3. They were told that they could let me know when they were ready to discuss the interaction at any point; if they hadn’t stopped before 10 minutes, I would stop them to initiate the interview

  4. I asked them the questions above and recorded the verbal interviews. The audio files of these interviews were later referred to for quotes

  5. I rotated between the participants so that they had breaks between each interaction, and asked the other participant to wait in another room whilst they were not taking part, so that they were less likely to be influenced by each other’s thoughts

Item 3: Technical Notes for Experiment 1.1

Prototype 01 aka Pinch Sine Waves

This interaction uses simple 1:1 mappings:

  • pinch amount controls oscillator volume
  • hand rotation controls oscillator frequency
  • each hand controls a distinct oscillator

Prototype 02 aka Dial Music

This interaction uses more musically focused mappings:

  • introducing discrete pitches in the frequency control so that a musical scale (C Major) could be explored
  • connecting the y-axis to frequency as well so that y-position would dictate what octave of the scale is played
  • added a delay effect that was controlled by the distance between the two hands to add more possibility for varied expression
  • pinch still controls volume

Item 3.1: Technical Note - Continuous vs. Discrete Data

This experiment made me consider the strengths and limitations of continuous data values versus discrete data values. The data coming from the LeapMotion is almost all continuous (the exception being a bool for whether the hand is present), and so in the first prototype I kept the parameters that the data was controlling continuous as well to try to make the mapping as simple (and therefore as intuitive) as possible.

The strengths of this approach were that it allowed me to make the system very responsive. Because the LeapMotion data being sent through OSC is so accurate, it needs to be smoothed before being used to control audio, in order to avoid clicks and noise that didn’t fit my aesthetic aims. When the output data is kept continuous, this smoothing is the only factor mediating between the user’s movement input and the audio output (especially in 1:1 mapping, as in this case). This means that the system can be made extremely responsive, with even small movements producing audible changes in the output. This also helps the user develop a conceptual model of the interaction, as the fast response speeds up the user’s feedback loop so that they can more quickly deduce which aspects of their input cause which aspects of the output.

In the second prototype, I went for a different approach with pitch/frequency, where I mapped the continuous input onto a set of discrete points, with smoothing applied when moving between them. The discrete points correspond to notes of a particular musical scale; for this prototype I chose C Major. This approach still maintains a certain sense of continuousness due to the smoothing between the notes. Overall, however, some of the sensitivity of the continuous-to-continuous approach is sacrificed here for an increase in reliability. By reliability I mean the ability to reliably recall a specific frequency. This is made more feasible with the continuous-to-discrete approach, as larger physical actions are required to change the frequency, and it is easier to hear the distinct frequencies and identify whether they have been successfully reached.
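
As a concrete illustration of this continuous-to-discrete approach, here is a minimal C# sketch of the mapping logic. The C Major scale and the idea of smoothing between discrete notes come from the prototype described above; the normalised input ranges, the three-octave span, and the smoothing coefficient are illustrative assumptions rather than values from the actual Max patch (which does its smoothing with line~ objects and filters).

```csharp
using System;

// A sketch of the Dial Music pitch mapping: continuous hand data is
// quantised to C Major scale degrees, then smoothed towards the target.
public static class DialMusicMapping
{
    // C Major scale degrees as semitone offsets from the root.
    static readonly int[] CMajor = { 0, 2, 4, 5, 7, 9, 11 };

    // Quantise a normalised rotation (0..1) and height (0..1) to a frequency.
    public static double TargetFrequency(double rotation01, double height01)
    {
        int degree = (int)Math.Clamp(rotation01 * CMajor.Length, 0, CMajor.Length - 1);
        int octave = (int)Math.Clamp(height01 * 3, 0, 2);     // assumed three octaves
        int midiNote = 60 + 12 * octave + CMajor[degree];     // 60 = middle C
        return 440.0 * Math.Pow(2.0, (midiNote - 69) / 12.0); // MIDI note to Hz
    }

    // One-pole smoothing: each call moves the current frequency a fraction of
    // the way towards the target, preserving a sense of continuousness
    // between the discrete notes.
    public static double Smooth(double current, double target, double coeff = 0.05)
        => current + coeff * (target - current);
}
```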

Item 4: Technical Notes for Experiment 1.2

Sampler

This interaction uses a velocity-based threshold trigger system. When a hand moves above a certain velocity threshold, a sample is triggered. To add variety and ‘playability’ through more varied sounds, I set the interaction up such that each direction of velocity triggered its own dedicated sample - giving 12 total samples that could be triggered. (This was achieved by giving each velocity component a negative threshold and a positive threshold.)

The quantise mode is a mode in which triggers are delayed and then sent out by a metronome on the next beat (determined by the metronome’s interval). This means that the system is less responsive, but sounds more musical and ‘in-time’.
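
To illustrate this trigger logic outside of Max, here is a hedged C# sketch. The threshold value, the axis ordering, and the assumption that the 12 samples come from 6 directions per hand across two hands are mine; the patch may organise this differently.

```csharp
using System;
using System.Collections.Generic;

// A sketch of the velocity threshold trigger system with a quantise mode.
public class VelocityTrigger
{
    const double Threshold = 0.8;                   // assumed per-axis speed threshold
    readonly bool[,] wasAbove = new bool[2, 3];     // trigger latch per hand, per axis
    readonly Queue<int> pending = new Queue<int>(); // triggers held for the next beat
    public bool QuantiseMode = false;

    // Call once per frame with a hand index (0 or 1) and its velocity components.
    public void Update(int hand, double vx, double vy, double vz)
    {
        double[] v = { vx, vy, vz };
        for (int axis = 0; axis < 3; axis++)
        {
            bool above = Math.Abs(v[axis]) >= Threshold;
            if (above && !wasAbove[hand, axis])   // fire only on the crossing frame
            {
                int direction = v[axis] > 0 ? 0 : 1;
                int sample = hand * 6 + axis * 2 + direction; // 12 dedicated samples
                if (QuantiseMode) pending.Enqueue(sample);    // wait for the metronome
                else PlaySample(sample);                      // fire immediately
            }
            wasAbove[hand, axis] = above;
        }
    }

    // Called by a metronome on each beat: flushing triggers here makes the
    // system less responsive but more musically 'in-time'.
    public void OnBeat()
    {
        while (pending.Count > 0) PlaySample(pending.Dequeue());
    }

    void PlaySample(int index) => Console.WriteLine($"trigger sample {index}");
}
```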

Item 5: Technical Notes for Experiment 1.3

Piano Clouds

The mechanics of this interaction are:

  • moving the position of the right hand in the x-y plane triggers random piano samples (this is a trigger grid system - the x and y positions are scaled to a range of 10 integers, and each time an integer value changes, a trigger is sent - see the sketch after this list)
  • opening and closing the left fist fades between four possible chords (all related: I, IV, V, vi of B Major) (this is achieved by having the output of all possible chords constantly playing, and then using the chord index from the hand data to control a fader system that sets which chord is audible, and fades between them as they change for smooth transitions)
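
As a concrete illustration of the trigger grid mechanic from the first bullet, here is a minimal C# sketch. The 10-step resolution follows the description above; the normalised 0..1 input range is my assumption.

```csharp
// A sketch of the trigger grid: x and y are each scaled to 10 integer steps,
// and a trigger fires whenever either integer value changes.
public class TriggerGrid
{
    int lastX = -1, lastY = -1;

    // Returns true when a (random) piano sample should be triggered.
    public bool Update(double x01, double y01)
    {
        int gx = (int)System.Math.Clamp(x01 * 10.0, 0, 9);
        int gy = (int)System.Math.Clamp(y01 * 10.0, 0, 9);
        bool changed = gx != lastX || gy != lastY;
        lastX = gx;
        lastY = gy;
        return changed;
    }
}
```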

For an interesting comparison of an alternative chord change system that I tried but felt didn’t offer the same expressive capabilities, see here:

Appendix A - Video 12.2: Piano Clouds Different Chord Changing Techniques Demo

I also thought that this would be a simple interaction for casually playing music alongside another ‘live’ musical instrument - with the simple triad chords easily used as an accompaniment.

An interesting technical aspect of this patch was designing a voice stealing system that avoided clicks. The audio files are each 8 seconds long, meaning that voices are stolen very often. To stop this stealing causing clicks, I added a 50ms fade at the beginning of each sample trigger in the poly patch, and also made a system that checks whether the triggered voice is being stolen (by checking whether the voice is busy when the trigger is sent); if it is, a 100ms fade-out is triggered before the sample is triggered as normal. In other interactions the latency this causes might make the interaction feel less responsive, but due to the more random feel of the triggering system in this interaction I was able to implement it without issue.
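
Here is a hedged C# sketch of that voice stealing logic. The fade times and the busy check follow the description above; the voice structure and round-robin allocation are my assumptions about how a poly-style system might be organised, not the actual patch.

```csharp
// A sketch of click-free voice stealing: stolen voices fade out for 100ms
// before retriggering, and every trigger opens with a 50ms fade-in.
public class Voice
{
    public bool Busy;

    public void Play(int sampleIndex, int fadeInMs)
    {
        Busy = true;
        // ... start sample playback with an amplitude ramp from 0 over fadeInMs ...
    }

    public void FadeOutThenPlay(int sampleIndex, int fadeOutMs, int fadeInMs)
    {
        // ... ramp the current playback to silence over fadeOutMs,
        // then start the new sample as in Play() ...
        Busy = true;
    }
}

public class VoicePool
{
    readonly Voice[] voices;
    int next;

    public VoicePool(int count)
    {
        voices = new Voice[count];
        for (int i = 0; i < count; i++) voices[i] = new Voice();
    }

    public void Trigger(int sampleIndex)
    {
        Voice v = voices[next];
        next = (next + 1) % voices.Length;   // round-robin voice allocation
        if (v.Busy)
            v.FadeOutThenPlay(sampleIndex, fadeOutMs: 100, fadeInMs: 50);
        else
            v.Play(sampleIndex, fadeInMs: 50);
    }
}
```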

Item 6: Technical Notes for Experiment 2.1

Wekinator MIDI Controller

I chose the Valhalla Frequency Echo plugin for this interaction as it has multiple parameters [delay length (in ms), frequency shift amount (in Hz) and feedback amount (%)] which can each affect the sound output in relatively extreme ways, relatively independently of each other. I also thought these specific controls would be interesting as they have similar effects on the pitch of the output sound, but through very different techniques - I hoped this would further confuse the user’s conceptual model of what they were actually controlling in this interaction.

I set up the X, Y, and Z position values for the left hand to be sent through Wekinator. I then used this data to train 4 different spatial positions for the left hand, connecting them to settings for the 3 chosen plugin parameters that would result in 4 very different states for the plugin. Wekinator allowed me to use machine learning in this way to train a complex, non-linear interpolation from the left hand’s positional data to the 3 chosen parameters of the Frequency Echo plugin.
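
Wekinator’s trained model is effectively a black box from the patch’s perspective. As a rough intuition for what such a position-to-parameters mapping does, here is an inverse-distance-weighted interpolation over four trained positions - a deliberately simple stand-in for illustration, not Wekinator’s actual learning algorithm, and all numbers are invented.

```csharp
using System;

// A stand-in for a trained position-to-parameters model: blends 4 stored
// plugin states by the hand's proximity to 4 trained positions.
public static class TrainedPositionMap
{
    // Four hypothetical trained left-hand positions (x, y, z).
    static readonly double[][] Positions =
    {
        new[] { 0.0, 0.0, 0.0 },
        new[] { 1.0, 0.0, 0.5 },
        new[] { 0.0, 1.0, 0.5 },
        new[] { 1.0, 1.0, 1.0 },
    };

    // Hypothetical plugin states: (delay ms, frequency shift Hz, feedback %).
    static readonly double[][] States =
    {
        new[] {  20.0,    0.0, 10.0 },
        new[] { 500.0,  200.0, 80.0 },
        new[] { 120.0, -150.0, 40.0 },
        new[] { 900.0,   30.0, 95.0 },
    };

    public static double[] Map(double x, double y, double z)
    {
        var output = new double[3];
        double totalWeight = 0;
        for (int i = 0; i < Positions.Length; i++)
        {
            double dx = x - Positions[i][0];
            double dy = y - Positions[i][1];
            double dz = z - Positions[i][2];
            // Inverse squared distance: nearby trained positions dominate.
            double w = 1.0 / (dx * dx + dy * dy + dz * dz + 1e-6);
            totalWeight += w;
            for (int p = 0; p < 3; p++) output[p] += w * States[i][p];
        }
        for (int p = 0; p < 3; p++) output[p] /= totalWeight;
        return output;
    }
}
```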

As an audio source I chose two different audio files for two different tests (which can be heard in each respective video demo of this interaction). I looped the audio so that there was constantly audio to manipulate with the plugin.

Item 7: Technical Notes for Experiment 2.2

Formant Filters

Prototype 01

This prototype was more concerned with simple limited control actions being used to manipulate a sonically complex output:

  • opening and closing the fists changes the vowel shape of the sound (continuous interpolation between presets of the formant frequencies for 3 distinct formants - see the sketch after this list)
  • y-axis position controls the frequency
  • the sound volume is constantly ON when any hand is present, and OFF when both hands are absent
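
As a sketch of the vowel interpolation from the first bullet, the C# below linearly interpolates the centre frequencies of 3 formant band-pass filters between two vowel presets as the fist opens. The formant values are textbook approximations for /a/ and /i/, not the presets used in the actual patch, and a fuller implementation would interpolate bandwidths and gains as well.

```csharp
// A sketch of morphing 3 formant frequencies between two vowel presets.
public static class VowelMorph
{
    static readonly double[] VowelA = { 800, 1150, 2900 }; // approx. /a/ formants (Hz)
    static readonly double[] VowelI = { 270, 2300, 3000 }; // approx. /i/ formants (Hz)

    // openness: 0 = closed fist, 1 = fully open hand.
    public static double[] FormantFrequencies(double openness)
    {
        var freqs = new double[3];
        for (int i = 0; i < 3; i++)
            freqs[i] = VowelA[i] + openness * (VowelI[i] - VowelA[i]);
        return freqs; // centre frequencies for the 3 band-pass (formant) filters
    }
}
```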

Prototype 02

This prototype explored a combination of varying styles of control to create a complex interface:

  • moving either hand beyond a set velocity threshold triggers a 400ms segment of sound to be played (polyphonically)
  • moving the x-position of the left hand interpolates between different presets of shapes for a volume envelope and a frequency envelope
  • vowel shape is still controlled in the same way, but preset vowel shapes are now distinguished as 10 discrete points in the control scale, smoothed between, so that it is easier to stay on a particular vowel

Item 8: Technical Notes for Experiment 2.3

Concatenative Corpus Explorer

  • the x-y position of the left hand moves around the 2D map of audio slices from the corpus, determining which plays on a loop
  • opening and closing the right hand controls volume: closing into a fist silences the interaction
  • opening and closing the left hand changes the length of a stereo delay line (this pitch shift when fluttering the fingers felt aesthetically satisfying)
  • rotating the right hand fades in a low-pass filter

I included three corpuses in the patch to choose from: a small ‘glitchy’ library that had quite smooth synthy sounds, a ‘friction’ library that was more rough and had some airy movement sounds, and a ‘mechanical’ library that had more clicky buttons or metallic bell like sounds. All of these recordings were ones I had recorded myself for previous projects.

I was quite happy with the filter implementation here - instead of mapping the hand rotation solely to the cut-off frequency of the filter, I mapped it to both the cut-off frequency and an overall dry/wet mix, so that the cut-off could be controlled in a specific range whilst still having a completely dry signal when the hand is in its ‘un-rotated’ position.
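
A minimal sketch of this dual mapping, with assumed ranges (only the dual-mapping idea itself is from the patch):

```csharp
// One rotation value drives both the low-pass cut-off and the dry/wet mix,
// so the signal is completely dry when the hand is un-rotated.
public static class RotationToFilter
{
    // rotation01: 0 = un-rotated hand, 1 = fully rotated.
    public static (double cutoffHz, double wetMix) Map(double rotation01)
    {
        double cutoffHz = 8000.0 - rotation01 * 7500.0; // sweep 8kHz down to 500Hz (assumed)
        double wetMix = rotation01;                     // fully dry at rotation 0
        return (cutoffHz, wetMix);
    }
}
```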

Item 9: Technical Notes for Experiment 3.1

FM Synth

I had a tough time figuring out the velocity control of volume. Failed attempts can be seen/heard here:

Appendix A - Video 4.1: Velocity-Volume Mapping Attempt #01

Appendix A - Video 4.2: Velocity-Volume Mapping Attempt #02

Appendix A - Video 4.3: Velocity-Volume Mapping Attempt #03

Eventually I managed to find a combination of very specific smoothing systems (using both line~ objects and low-pass filters to smooth the data in signal form), as well as a system that started timing whenever a hand went out of view, and faded the volume and all other parameters out once the hand had been absent for approximately 300ms. This 300ms delay prevented the system from cutting the volume every time the camera lost sight of the hand momentarily, which happens more often when the hands are moving fast.
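
The timeout logic can be sketched in a few lines of C#. The 300ms figure comes from the text above; the per-frame update structure is my assumption, and in the actual patch the resulting gate is further smoothed before reaching the audio.

```csharp
// A sketch of the hand-absence timeout: brief tracking dropouts are ignored,
// and only a sustained absence fades the synth voice out.
public class HandPresenceGate
{
    const double TimeoutMs = 300.0;
    double absentMs;

    // Returns the target gain for the synth voice (1 = audible, 0 = faded out).
    public double Update(bool handVisible, double deltaMs)
    {
        if (handVisible)
        {
            absentMs = 0;
            return 1.0;
        }
        absentMs += deltaMs; // momentary dropouts during fast movement stay below this
        return absentMs < TimeoutMs ? 1.0 : 0.0;
    }
}
```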

I made each hand control a distinct synth voice, as this seemed to follow a physical logic that they are separate beings. I did, however, give them the same basic controls so that there was a coherent sense of what ‘a hand’ sounded like. To give a bit more characteristic separation to each hand I gave them different parameter ranges - again this felt physically logical to me, as each of our hands is very similar but feels clearly distinct in character. I hoped that by following this form of physical logic the sound would feel more like a logical extension of the body, rather than an arbitrary sound being controlled by it.

I also mapped a low-pass filter on each synth to how closed the hand was. This felt appropriate as I wanted some kind of sonic way to distinguish the hand being open or closed, as these feel like such different actions physically. I thought that by mapping a low-pass filter to the hand closing, it would further create the sensation that the sounds are coming from the hands, and therefore when they are closed the sound comes out ‘muffled’ in this way.

To add a greater sense of sonic interest I added in a system of LFOs that only affect the sound when both hands are present.

Item 10: Technical Notes for Experiment 3.2

Scrubber Explorer

I added in a randomising mode that I hoped would provide a more playful experience, where the user wouldn’t need to interact with the max interface as much with the mouse. In this mode, each time a hand closes fully, the position in the audio buffer that it is controlling changes randomly.
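
The randomising mode boils down to an edge detector, sketched here in C# (the closedness threshold is an assumption):

```csharp
// Picks a new random buffer position each time the hand fully closes.
public class RandomisingMode
{
    readonly System.Random rng = new System.Random();
    bool wasClosed;

    // closedness: 0 = open hand, 1 = fully closed fist. Returns a new
    // normalised buffer position on the open-to-closed transition, or null
    // when the current position should be kept.
    public double? Update(double closedness)
    {
        bool isClosed = closedness > 0.95;
        double? newPosition = (isClosed && !wasClosed) ? (double?)rng.NextDouble() : null;
        wasClosed = isClosed;
        return newPosition;
    }
}
```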

I enjoyed manipulating playback rate in this interaction, as playback rate has the added benefit of affecting multiple qualities of the sound through a single parameter, i.e. volume, pitch, and timbre.

Each hand’s velocity is mapped slightly differently to playback rate so that even when they are both using the same point in the same buffer, they have a slightly different character and feel.

I also added a stereo delay line to give the sound more of a sense of movement and width, and had the opening and closing of the fists control the length of this delay line, as I find the pitch shifting this produces to be very descriptive of the action of fluttering one’s fingers.

Item 11: Technical Notes for Experiment 3.3

Magic Spells

During the charging phases of the spells, I used synth systems such as those in experiment 3.1 and 3.2 to create sounds that would respond in real-time to the hand moving around the interaction space, whilst also controlling some of the parameters with a time based envelope that would make the sounds change and become more intense over time even if the hand wasn’t moved, to reflect the growing power of the charging spell. I also added in features such as LFOs and filtered noise that only begin to affect parameters once a spell is fully charged, so that if a user continues to hold a spell after it is fully charged, this kind of turbulence in the sound parameters reflects the unstable energy of the spell that is waiting to be released.

Then for the release of the spell I designed transient samples composed of 3 layers. Each layer aimed to add more weight to the sound, and either 1, 2, or all 3 of the layers would be triggered depending on how long the spell had been charged. This aimed to support the sensation that a spell charged for longer was similar to, but more powerful than, the same spell charged for less time. Each layer had slight variations so that multiple spells cast in succession would still sound varied.
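
The layer selection can be sketched very simply. The 3-layer structure and the idea of charge-dependent weight are from the description above; the time boundaries are illustrative assumptions.

```csharp
// A sketch of the layered spell-release logic.
public static class SpellRelease
{
    // chargeMs: how long the spell was charged before the hand opened.
    public static int LayersToTrigger(double chargeMs)
    {
        if (chargeMs < 500) return 1;   // short charge: lightest layer only
        if (chargeMs < 1500) return 2;  // medium charge: add some weight
        return 3;                       // fully charged: all layers fire
    }

    // Pick a random variation per layer so successive casts still sound varied.
    public static int PickVariation(System.Random rng, int variationsPerLayer)
        => rng.Next(variationsPerLayer);
}
```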

Item 12: The Failures of this Project

The Failure to Investigate Different Interfaces

  • At a few stages in this project I have encountered failures that have shifted my path and informed my process. The first of these was the failure to expand the scale of the interactions. Originally, I had wanted to explore physical sound interactions more broadly, by using multiple kinds of physical sound interface and investigating the joyful possibilities of each. However, I had a bad experience attempting to integrate a VIVE VR setup, which included tracker objects that could be accurately tracked within a space. This was mainly due to the limits of my personal computer’s processing capability, and to not having access to a space where I could keep the system set up for long periods whilst debugging and designing for it (the system required a large empty space where cameras could be set up). However, this failure also ended up helping me to hone and focus the scope of the project. As I struggled with the physical and technical difficulties of this new setup, I realised that I was also struggling to conceptualise how I would transfer my work to a new interface. This made me realise that for this project I was much more interested in the technical specifics of designing sound interactions for one interface, with the focus on how I negotiate the data and input/output possibilities. I therefore decided to keep my focus on a single interface for this project, so that I could prioritise exploring the many ways to design for a single set of inputs, rather than getting overwhelmed or distracted by setting up many different interfaces only to investigate each less thoroughly than I would otherwise be able to.

The Failure to Package the Interactions Into a Single Application

  • Another failure was my attempt to create a streamlined package in which the sound interactions could be experienced through a single software application. Once I had created a large number of interactions in max MSP that I was happy with, I wanted to take them to the next logical step in terms of being easily accessible and usable by members of the general public. To me this meant creating a single Unity application that would allow you to select from a number of the interactions I had designed and use them with the LeapMotion without having to open any other software. I approached this by adding the libpd wrapper plugin made by Niall Moody to my Unity project that already had the LeapMotion plugin working. I remade a few of my max MSP patches in pure data and imported them into the Unity project. This had initial success and even allowed me to play with some visual possibilities that Unity offered (see a demo of one such experiment here: Appendix A - Video 9.3: Record Scratcher with Visual Feedback Demo). However, I encountered issues with the Unity project where the builds would not track the LeapMotion hands, and the editor would crash extremely regularly when updating code. I began to debug this; however, after a few days of continued failure to find and fix the problem, I decided to leave this part of the project for future development. It was costing a huge amount of time, and when I analysed why I was doing it, I realised it was more about wrapping the project up in a software package that felt neater and easier to engage with than about continuing my investigation of joyful interactions. Therefore, whilst I still want to develop this aspect further, it was beyond the scope of this project and was therefore set aside.

The Failure to Run User Studies

  • The biggest failure of this project, in my opinion, was not integrating a more significant form of study in which I gathered responses to the various interactions from a group of people other than myself. I was aware at various points that this was a possible route, but decided early on that, due to the contentious nature of defining joy, it would be most useful to first focus on my own experience and tastes. I still stand by this decision, as I think it is important for me as a designer to establish my own tastes and experiential preferences in order to inform my own artistic style. However, I do feel that, having established my personal preferences, the experiments would have really benefitted from feedback from a wider range of users, who could shed light on how these attempts at creating joy might be interpreted by others with less knowledge of the project and its aims. This was especially apparent once I ran the limited study that I was able to, in which two participants play tested 5 of my interactions. Even this small study was really eye (and ear) opening, and gave me a lot of new excitement for the project.

Item 13: The Future of This Project

Development For Software Users

I would like to develop these interactions by building them into a single piece of software, most likely made in Unity, that can be hosted for free on sites such as itch.io, where any interested user with a LeapMotion will be able to download it and try out the interactions. I think this would also benefit the documentation of the work I have done thus far, as a built Unity application will be more stable over time than the patches I have made, so it will more concretely ‘freeze’ the interactions for future reference.

Development for Personal Practice

Having started to touch upon more aesthetic issues near the end of this project, I would like to dedicate more work to taking what I have learnt from these experiments and using it to make a piece or installation with a more distinct artistic aim for me creatively. I can imagine this taking the form both of performances with the LeapMotion, designing interactions that are meant to be performed by me, and of installations where I design interactions around a specific artistic theme or idea, meant to be interacted with by a general audience within a space. I think this move from a more general experimental model into a more creatively driven artistic practice will further inform my design process. As Perry Cook suggests when designing digital musical interfaces: “Make a piece, not an instrument or controller” (pg.229) - suggesting that a focus on the instrument can make you lose sight of what you actually want it to do or how you want to use it, whereas designing an interaction for a specific piece or performance will inevitably produce an instrument through the process.

Development for Different Interfaces

I would like to take some of the principles on joyful interactions that I have found from these experiments and try applying them to different interfaces - and combinations of interfaces. The interface definitely influences the ways that I interpret these design principles, and so I think that further investigation into how these principles exist/manifest (or don’t exist) within other interfaces would help to strengthen my understanding of how these physical sound interactions can be thoughtfully designed.

Item 14: My design take-aways

  • Responsive systems are joyful

    • When a system responds quickly it feels more connected to me and I feel joy for being able to influence it.

  • Direct mapping of energy in the input to energy in the output is joyful

    • The connection of energy in my gestures through velocity to a parameter such as volume that fundamentally requires greater energy at higher values is very satisfying. It allows for appropriate scaling of the interaction, where if I make a small energy gesture, the system will respond appropriately. I find this joyful because the mimicking of energy between the system and myself allows me to see more of myself within the sound output, allowing me also to project my intended emotions or character that I put into the input, in the output. This makes the interaction feel very expressive to me, which to me is joyful.

  • The ability to construct/sculpt phrases is joyful

    • I returned on multiple occasions to the mapping of volume to how open or closed the fist was. This felt like it was because it really gave me the impression of control over sonic phrasing. For me this expressive control over broad gestures was more joyful than minute control over smaller details in the sound output. I think of this as the difference between being able to construct sentences/phrases in speech vs. being able to change the accent of speech. Both have influence over the meanings and emotions of the final output, but to me being able to construct phrases that could then become a series of phrases opened up joyful expressive possibilities.

  • Aesthetic connection between input and output is joyful

    • Here I refer to the connection between an input and output that feels right. This could be aesthetically in terms of some projected meaning such as my Magic Spell interactions, where the magic sound output feels like it justifies the physical actions that mimic casting a magic spell and vice versa, or it could be aesthetically in a more abstract sense where the physical action and sound feel sympathetic in character. For example the mapping of a closed hand to a low-pass filter, where the action of closing the hand feels similar to the way a low-pass filter sounds when applied to a sound.

There were many other insights into physical sound interactions that I gained, as documented in the experimental evaluations in this write up, but for me these points above stood out as pivotal ideas that have influenced the way that I analyse and design these kinds of interactions. Moreover, these are the ideas that helped me grasp the ways that I experience joy within these interactions.

Joy is undoubtedly a subjective experience that will mean different things for different people at different points in their lives, which is exactly why for me as a designer it is so useful to take this research opportunity to break it down a bit and analyse personally what joy means to me now within these interactions. Having made these conclusions, I look forward to taking these concepts into my future design work and seeing how my understanding and experience of them changes over time.

I think it was always an over-ambitious goal to capture the essence of joy in a way that could be easily identified and bottled for easy access, but this project has been an excellent exercise in taking an emotional and experiential end goal, exploring and experimenting with the many ways in which I might achieve it, and importantly evaluating why they do and don’t work (for me and possibly for others).

Item 15: A Note on My Role Within This Project

  • My role within this project has been a contested one. I approached it first thinking of myself as a sound designer, and then over the course of the project began to think of myself more as an interaction designer - taking into consideration much more than just sound, but also the interfaces and the people who were intended to use them. On reflection, I think my role would probably be more accurately described as software developer/engineer/designer. Whilst I still consider myself a sound designer, this project really spent most of its attention on how I could leverage software and data processing to achieve sound design possibilities, rather than getting into deep exploration of the sound design possibilities themselves. Similarly, whilst I did consider the user within all of the interactions, I didn’t expand the project to include the human testing that might be involved in rigorous interaction design. Ultimately the user considered within this project was myself. This limits the project, but for me it has been greatly useful in identifying the aspects of these interactions that are joyful to me within the design process itself: the software and data mapping. So whilst I prefer the multidisciplinary label of interaction designer, who might be engaged in tasks ranging through sound design, software engineering, electrical engineering, and other areas, I acknowledge that in this project I have largely worked as a software engineer, thinking about how the software components can be designed to enable certain interactions that I find joyful.

Item 16: Technical Note on How I Connected the LeapMotion to Max MSP

When I first managed to get hold of the LeapMotion camera, I couldn’t find any immediately obvious way to connect it to max MSP. I found some references to old max MSP external objects that had been made to interface the two, but these were 5-6 years old and I couldn’t get them to work (various forum posts suggested others had had the same issues).

UltraLeap, who make the LeapMotion, do however offer free plugins for Unity and Unreal, which allow you to connect the LeapMotion to these game engines.

I therefore set about making my own interface application in Unity using the LeapMotion plugin, and Thomas Fredericks’ plugin for connecting Unity to OSC. The results of this can be seen in Appendix D, Item 2, with an accompanying readme file in Appendix D, Item 1.

I was really proud of this development, creating child classes from the classes shipped with the LeapMotion plugin so that I could adapt the code to send data via OSC without having to tinker with their code directly (the only change I ended up having to make in their code was switching some variables from ‘private’ to ‘protected’, so that they could be accessed in the child classes that I made).
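
A simplified illustration of that subclassing pattern is sketched below. The class and member names are hypothetical stand-ins (the real LeapMotion plugin classes differ), but the pattern is the same: once a base-class member is changed from ‘private’ to ‘protected’, a child class can reuse the plugin’s behaviour and add OSC output without editing the plugin’s code any further.

```csharp
// Stands in for a class shipped with the plugin (names are hypothetical).
public class PluginHandProvider
{
    protected float[] palmPosition = new float[3]; // originally 'private'

    protected virtual void OnFrame()
    {
        // ... the plugin's own per-frame hand tracking logic ...
    }
}

// The child class keeps the plugin's behaviour and forwards data over OSC.
public class OscHandProvider : PluginHandProvider
{
    protected override void OnFrame()
    {
        base.OnFrame();                      // keep the plugin's processing intact
        SendOsc("/hand/palm", palmPosition); // then send the data out via OSC
    }

    void SendOsc(string address, float[] values)
    {
        // ... hand the message to an OSC library (here, the UnityOSC plugin) ...
    }
}
```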

I also used this as an opportunity to exercise a bit of software design logic, trying to make the application as intuitive to a new user as possible - even if it still feels fairly clunky due to its basic visual and UI design.

After I had established this system, I later discovered that I had somehow missed the existence of the GECO application, a now-free application designed specifically to send data from the LeapMotion to MIDI and OSC, user-friendly interface and all.

It was disappointing to see that I could have saved myself some work; however, I genuinely loved the challenge of connecting the data in such a manual, hands-on way. I also think that ultimately this helped my designs.

By going in and finding and sending all of the data values manually, I became much more familiar with the nature of all of the data coming from the LeapMotion, its ranges, its behaviour, etc.

I think this is a beneficial approach because it helps bypass some of the bias caused by the apparent affordances suggested by the data being already packaged for you in a specific interface. This was discussed a bit by Yann Seznec during a feedback session for a work in progress of this project during the Future Flavours of Sound Festival 2022.43 Yann mentioned how it is possible with these kinds of interfaces to not notice how much they are suggesting ways of use to you through the data that they make available. For example, when they give you the velocity of the hand as a value, you feel like you should use it. Yann brought this up to emphasise the importance of stepping back and really thinking carefully about why I am choosing certain input values as controls, and how they are serving the interaction aesthetically and practically (if at all!).

Therefore, whilst this technical excursion within the project was slightly superfluous, I think it served a useful purpose in informing my design approach, both as an exercise in designing an interface in Unity and as a way of thinking of data in a more raw, basic way, allowing for more thoughtful and careful design.

Item 17: A Note on Joy Through Surprise

The joy through surprise noted in Experiment 2.1 made me consider how surprise is joyful when the output exceeds our expectations in some way. From my experience, the surprise of having less impact on the sonic output than expected from an input gesture is disappointing and frustrating, whereas the surprise of having more impact on the sonic output than expected is much funnier and more joyful.

This feels like a fairly obvious observation, but still one I find difficult to justify concretely. There can also be humour and joy sometimes in the humility involved in an interaction when you try to do a huge elaborate action and fail in some way. This could be seen as an example of joy through the surprise of having less impact than expected.

However, in general from my experience with the experiments within this project I have found that for a surprise to be joyful it must be surprising due to exceeding expectations.



Appendix F: Quotes

Section 1: Playtesting Quotes

This section includes some quotes taken from interviews with Andrew and Lyla after they had tested each respective interaction - these quotes are extensive but still selective. For the full verbatim interviews, see the audio recordings in Appendix B.

Andrew - Scrubbing Explorer

  • “…creative, I feel like you’re allowed to be really creative while you’re playing with them because there are so many options.”

  • “interesting, it’s a really interesting way of controlling audio.”

  • “I really liked being able to find a specific part of the audio file that you could then control… then that being randomised I feel like keeps things really interesting and really fresh each time you start moving around because it’s like really playful.”

  • “Not being able to cross my hands over well… I found that kind of annoying”

  • “It definitely reminds me of like playing with records and like trying to figure how to do live scratching with records which is always super fun”

  • “The singing bowl one on its own I feel like is not as fun to mess with because it’s such a constant sound… but the plastic bag and the paper tearing because they’re so like textural… it makes you feel like you’re really controlling every little particle that happens within the sound which is really fun”

  • “I definitely feel like it’s for me definitely more of like a creative tool, like I can definitely see myself trying to use this in combination with like other musical tools to have like a sense of randomness in a performance or some kind of aleatoric motion… I feel like that would be really fun and really captivating for people to watch as well, and hear”

Andrew - Piano Clouds

  • “I’d say that was very relaxing… it’s kind of meditative in a way”

  • “I really don’t think this one’s as much of a creative tool”

  • “I don’t think it’s as playful as the other one… [the Scrubber Explorer interaction] even though it’s very fun to play with, I don’t think of it as playful. It’s more of like an experience, of being able to waft your hands through the notes of the piano randomly”

  • “…because it’s randomised it definitely makes it feel not like an instrument because you don’t really have as precise control over it either, so it definitely feels more like an experience than playful, creative, or like an instrument I think”

  • “…isn’t really like a game but it is playful”

  • “I really liked trying to make musical phrases with it which was harder than I thought it would be because [I would] try to barely move and just move one finger and then sometimes it would only play one note but then sometimes it would play like four notes. So like playing with that to try to like see how I can control the flow of the notes was really interesting”

  • “I also liked a lot that it was triggering the bass notes and the high notes at the same time so you didn’t really have control over that but it made it feel more like a musical piece that you’re like controlling rather than actively composing I guess”

  • “I think [not having complete control over how many notes were played] was very fun”

  • “the thing I found that was kind of frustrating was when you changed the left hand to change the chord… when you triggered any notes while it was changing it would be kind of weird and kind of break the relaxing experience of it a bit”

  • “…if you take away your fist [that controls the chord type] it doesn’t always land on the last [chord] that you had open, so that was kind of fun as well to mess with… I definitely liked that random aspect of the chord system”

Andrew - Formant Filters

  • “Cacophonous”

  • “It takes a lot of effort”

  • “it’s like you’re trying to argue with someone who doesn’t want to listen to you”

  • “…each of your hands are arguing with each other and you just can’t understand anything that’s going on”

  • “it’s like mimicking how much energy it takes to argue with someone”

  • “I really like and dislike how difficult it is to control the vowels”

  • “especially because each hand does its own vowels it really gives your hands individual characteristics, it makes them feel like two very different objects” (incorrect conceptual model)

  • “like [your hands] are going against each other rather than working together”

  • “it’s more on the playful side”

  • “I do think it’s fun to try to make your hands argue with each other”

Andrew - Concatenative

  • “I think I just made the coolest piece”

  • “I think this one might actually be my favourite so far out of all of them… I think what I like most about this one… is when you can see the point cloud with this one, its almost like you have a map of planets and each planet has its own different ambience to it, and you can pick which planet you fly to”

  • “what’s nice about it is how it’s limiting, you can’t just [go to the point you want]”

  • “you can be like ‘oh this planet sounds really nice in contrast to this one’, and try to like jump between the two. And I feel like that motion is really fun”

  • “the way you control the filter in this one is really intuitive I feel… it’s almost like you’re holding a big ball and depending on how you move the ball around the space - whether it’s a big ball or a small ball - and how you twist your hands around it, it creates the sound in that way”

  • “I feel like this one works really well the way it is”

  • “it’s slightly annoying that you can’t just pick exactly where it is [the position that determines the playback slice of audio] but also that’s like kind of one of my favourite parts about it”

  • “the pan position wasn’t really easily controllable but I didn’t really need it to be”

  • “it feels less like an audio tool… I would say this one feels more playful”

Andrew - Magic Spells

  • “that one’s so fun, that one’s really fun”

  • “I like how it has the slight bit of control and randomisation… even though it’s charging the same spell… the slight variation makes it much more engaging”

  • “this one’s definitely I think the most fun out of all of them… I don’t know if it’s my favourite one but I think it’s the most playful one for sure”

  • “it’s very satisfying to have almost full control over something… because it’s so simple I think that makes it really satisfying”

  • “I feel like the controls in this one are simpler than the others… it has very few things that you can tell that you’re controlling at least, regardless of how many things are going on under the surface”

  • “it really feels like you have this power in your hands now through this one, I don’t know if it’s because of the sounds or the connotations of magic spells… but it’s very fun to just [makes spell noises] you really feel like you’re just throwing spells at somebody”

  • “if it loses track of your hands [it’s frustrating] - I’m like ‘aw damn it’… I just wish I didn’t lose track of my hands… just hearing the charge of the spell die down is just like [makes disappointed noise]”

  • “I really like the different charging times of the different spells”

  • “it’s fun to play with how you can combine the two spells”

  • “I definitely feel like this one feels more playful or like a game”

Lyla - Scrubbing Explorer

  • “it’s kind of like an investigation”

  • “it’s very different from playing an instrument you can touch or a tool you can actually move”

  • “it kind of takes a while to figure out how far you can [move your hands]… it becomes more intuitive as you go”

  • “it’s kind of similar to how you would have to change the pressure of a bow on a string, when you’re playing on strings of different thickness on a stringed instrument… where you have to adjust what you’re doing according to which string you’re on, so in this case it felt analogous to how you’re changing between the different files”

  • “I enjoyed figuring out the boundaries of what you can do… because you have so many settings, it’s simple in terms of what’s there, but then because of the number of endless combinations you can have”

  • “you would expect it to be more consistent, but because sometimes I found that it wasn’t like perfect (or maybe I was doing something wrong), it was really cool to hear a sound that I was expecting come out slightly different. So figuring that out as I played with it was really, really cool”

  • “it was really cool to sometimes combine [the sounds]”

  • “one thing I didn’t like was… sometimes when you move your hand too fast… sometimes your hand would disappear”

  • “similar to the frustrations of like figuring out the limitations of, for example, the violin… I need to figure out the right pressure and technique to avoid having that sound that you don’t want to maintain that consistency”

  • “consistency was one thing I wasn’t like very comfortable with… but that’s what I kind of like about it too, it kind of breaks your expectations”

  • “it felt like I was playing around with the pedals of a harp… I enjoyed it in that way, when I was doing it I didn’t feel like I was just pressing buttons to get a sound… I needed to create the sound myself so I needed to figure it out and get familiar with it”

  • “you couldn’t really hold tones for very long which is another thing I wasn’t very happy with, but it was really cool because it wouldn’t last so you would have to depend on the velocity of your hand and that sort of action, and know that you’re not going to get it to go on forever. So that was really fun to play around with”

Lyla - Piano Clouds

  • “it was meditative, because I guess there weren’t as many options [as the Scrubbing Explorer]”

  • “I felt more like I was playing with like a Brian Eno iPhone game [Bloom by Brian Eno and Peter Chilvers]”

  • “it felt like the world I was in was smaller and more simple”

  • “[my favourite thing was] thinking if this was a composition, how would I change the chords according to the emotion I want”

  • “it’s not very jarring, the way the tones are played are more monotonous… I really enjoyed that”

  • “I really like that you organised them into chords… I find that it’s easier to associate feelings and colours and like a world with a chord, so being able to play around with these specific chords… it felt really nice to experiment with the changes”

  • “if you move too fast then it sounds like you’re hitting the same note like three times and that impact, I just found it a little bit annoying because - one, it’s not natural sounding to what an actual piano would sound like, two, the same note being hit multiple times when it’s not in character with the rest of what’s going on… I feel like it’s meant to be more of a peaceful experience… but those moments would catch me off guard”

  • “it felt nice that I didn’t have to think as much with it, I can kind of just lose myself in the tool rather than think about what I’m doing with the tool”

  • “it felt creative in that I could change how it made me feel, with the chords and the speeds. But other than that it felt meditative because it was so simple”

Lyla - Formant Filters

  • “it was eye-opening in that it kind of made me think about the shape of my own mouth when I’m speaking vs. the shapes that I’m making with my hands and the vowels that are being made”

  • “I found this a little bit more confusing compared to the other ones because it took a lot more getting used to the controls to be able to make specific sounds.”

  • “it feels more complicated compared to the other ones… not because of what’s there but because of what I’m having to do with my hands”

  • “I enjoyed the vowel changes… I wasn’t sure if I was doing it right… it felt more natural to do this [rotated hand]” (compared to the mapping of fist to vowel)

  • “it was really cool to hear human voice like sounds, and being able to manipulate it on the spot”

  • “when you get to a certain spot it loses the hand and then it’ll just go back to doing like [a different vowel sound] rather than whatever other vowel I was actually doing which was slightly annoying. Also the fact that I couldn’t lengthen [the sound envelope]”

  • “Once you understand the association of the shape of the hand with the vowel it can be really cool to play around with”

  • “it felt more educational… I felt like playing around with it made me more attentive to how subtle changes even with this tool could change how something sounds so easily, and it made me think about the phonetic alphabet and my own mouth and everything. So it kind of made me pay more attention - in that sense it was eye-opening”

  • “the association I had with [the sound] felt more personal because it’s closer to a human voice, so in that sense the tool felt easier for me to understand in terms of like conceptually what it is but in terms of like how to control it, it took more effort. But I feel like if the controls were a little bit more fluid it could be even more educational and more of an experimental tool”
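
Lyla’s comparison between the shape of her mouth and the shapes of her hands maps neatly onto how formant filtering works in general: each vowel is characterised by a small set of resonant peaks (formants), and morphing between vowels means interpolating those peak frequencies. The sketch below is a rough illustration using textbook formant values; the hand-openness parameter and the fist-to-/i/ mapping are my assumptions here, not the mapping in the patch itself.

```python
# Approximate first two formant frequencies (Hz) for two vowels -
# textbook values for illustration, not the values in the patch.
VOWEL_A = (700, 1100)   # open hand   -> open vowel /a/  (assumed mapping)
VOWEL_I = (300, 2300)   # closed fist -> close vowel /i/ (assumed mapping)

def formant_frequencies(openness):
    """Interpolate formant centre frequencies from hand openness (0..1).

    Intermediate hand shapes land between vowels, which is why small,
    subtle changes in hand shape audibly shift the vowel."""
    return tuple(
        fi + openness * (fa - fi)
        for fi, fa in zip(VOWEL_I, VOWEL_A)
    )

print(formant_frequencies(0.5))  # -> (500.0, 1700.0)
```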

Lyla - Concatenative

  • “I’m a very big fan… I really enjoyed it”

  • “it was fun because it felt like you were exploring a map - a topographical map but instead of everything being kind of 2D it’s 3D because the translation is like in your hands. So it felt like you were kind of extracting that out of the computer and then making it into a 3 dimensional world which I really, really liked.”

  • “I kind of like having the [visual] map because it makes it easier for me to navigate vs. like the vowel one you really had to listen”

  • “I like the variation in all the different sound options there are”

  • “I love the filter… it was very consistent throughout all of the settings and… the freedom that I had and the sensitivity, it works well with the sensor - I felt like it was easier to control in this one”

  • “it gives you the liberty to explore each dot [audio slice] separately and then there’s also that like bigger motion of experimenting with this map as a whole”

  • “it would quickly inform you what each of the settings do - like what the filter does to the sound”

  • “it felt more like I was playing with it this time compared to the other ones”

  • “I feel like when you feel more freedom… the more intuitive and easier it is to play with a sound with an instrument, the more you can enjoy it. Like it doesn’t feel good to play the violin when all you can create are screechy noises… the satisfaction comes when you’re able to sub-consciously have all of the foundations set in place… when you can actually create a sound and then you can play with a sound to make it misbehave or behave… in that way because it was so much easier to access all of the different features of this program, it felt more fun for me - it felt like play.”

  • “the fact that it’s so much easier to hear the effect [of your actions] …that makes it so much easier”

Lyla - Magic Spells

  • “it was satisfying, but in a different way to [the concatenative interaction]”

  • “it’s playing around with like the anticipation of a sound or like the result that you expect”

  • “I immediately enjoyed how audio-visual cues from the Magic Spell Interaction subjects you to a pressuring push-and-pull sensation, playing off of the excitement of anticipation, until you, yourself, experiment with the intensity and timing of the release to achieve a “desired” sound. I also appreciated the option to explore the amalgamated and separate outputs from the L/R hands and felt particularly excited upon discovering the very wet mechanical sounds as I moved my R hand further away from the sensor and found myself comparing their profile to that of other sounds.”

  • “I was slightly annoyed by how the sensor did not always pick up what I was doing with my hands, resulting in the perturbed access of the extra noises or the unsatisfying experience of withering sound from a semi-charged “battery” when the hands were no longer registered by the sensor. I also felt like I wanted to have the freedom to orchestrate the charging speed of the respective batteries, which was not available. However, it was cool to notice the change in my emotional response to these issues over time, as I was able to positively accredit the subtle frustration for making the interaction feel more game-like, eventually sparking a very different sense of joy.”
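
Both testers describe the same charge-and-release loop: a spell “battery” that fills while the sensor tracks the hands and withers when tracking drops out. The sketch below illustrates that behaviour in the abstract; the function, frame logic, and rates are illustrative assumptions rather than values from my patch.

```python
def update_charge(charge, hand_tracked, dt, charge_rate=0.5, decay_rate=1.0):
    """Advance a spell 'battery' by one sensor frame.

    The charge builds while the hand is tracked and withers when it
    is lost - the 'aw damn it' moment both testers describe."""
    charge += (charge_rate if hand_tracked else -decay_rate) * dt
    return max(0.0, min(1.0, charge))  # keep the battery in [0, 1]

# Two seconds of charging at 60 fps, then tracking drops out.
charge = 0.0
for _ in range(120):
    charge = update_charge(charge, hand_tracked=True, dt=1/60)
print(round(charge, 2))   # 1.0 (fully charged)
charge = update_charge(charge, hand_tracked=False, dt=1/60)
print(round(charge, 3))   # 0.983 (the charge beginning to wither)
```

The decay branch is the source of both frustrations quoted above - and, as Lyla notes, also part of what makes the interaction feel game-like.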

Section 2: Future Flavours of Sound Festival Feedback

This section includes quotes from Yann Seznec and Jacob Sachs-Mishalanie during the Future Flavours of Sound Festival 2022, where I shared videos of the FM Velocity Synth and an early prototype of the Formant Filter interaction. These are selected quotes from their direct feedback; for the full discussion, see the video of the event (https://www.youtube.com/watch?v=iCKRifnjdDA - the discussion of my work falls between approximately 01:10:44 and 01:36:30).

  • “oh man, it’s really nice that it’s like, I guess, velocity based? …it’s amazing how simple that feels, and yet it’s very effective because it means you can hold your hands still and there’s no sound and that’s a very strong sonic interaction kind of thing was actually managing silence. Because managing silence in anything that involves movement is very hard - especially anything that involves direct mapping of movement… so that I thought was really effective.” (01:16:22) [Yann Seznec]

  • “Leaning into [fist opening and closing] was a good choice because from a sound design perspective and a sound interaction perspective, closing your hand for me definitely has that kind of filtering feel… it works - that connection is very strong… you don’t even notice it happening because it’s such a strong interaction.” (01:35:14) [Yann Seznec]

  • “…the vowel thing, that was fucking awesome… that was something I had not expected at all.” (01:19:18) [Yann Seznec]

  • “…if you want to sort of like make it fun or something, it would be cool if this feels interactive between multiple people… it could be much more fun if two people get up and then they have to figure out how to do something together.” (01:24:12) [Jacob Sachs-Mishalanie]

  • “…it might be interesting to have some more things happening in the space that you’re interacting with that are a little bit harder to understand so that there’s a little bit of a discovery game happening.” (01:27:19) [Jacob Sachs-Mishalanie]
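
Returning to Yann’s first point: a velocity-based mapping “manages silence” by construction, because a still hand produces zero speed and therefore zero output, with no separate gating logic needed. A minimal sketch of that idea follows; the function, units, and sensitivity constant are illustrative assumptions rather than the synth’s actual mapping.

```python
import math

def velocity_to_amplitude(prev_pos, curr_pos, dt, sensitivity=4.0):
    """Map hand speed between consecutive sensor frames to amplitude.

    A still hand gives zero distance, zero speed, and therefore exact
    silence - the 'managing silence' Yann highlights."""
    speed = math.dist(prev_pos, curr_pos) / dt
    return min(1.0, speed * sensitivity)  # clamp so fast motion saturates

# Still hand -> silence; moving hand -> sound.
print(velocity_to_amplitude((0.1, 0.2, 0.3), (0.1, 0.2, 0.3), dt=1/60))   # 0.0
print(velocity_to_amplitude((0.1, 0.2, 0.3), (0.12, 0.2, 0.3), dt=1/60))  # 1.0 (clamped)
```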

Section 3: Some Influential Quotes from Relevant Literature

This section includes some further quotes that I noted from a few selected texts that were particularly inspiring to this project. They are by no means exhaustive of the reading that influenced this project, but as I had them ready at hand in digital format I am including them here largely for my own future reference.

3.1: ‘Artful Design’ by Ge Wang

  • “Principle 4.5: Design things with a computer that would not be possible without! Do not simply copy, port, digitise, or emulate. Rather, create something novel and unique to the medium – something that could not exist without it… Design to the medium!” (pg.181)

  • “The sublime is not a feature to put into a product, but a consequence of experience. To design artfully is to design with authenticity, to shape things that strive for deep beauty: a harmony of form and function in search of truth, clarity, an ideal, and our common humanity. Transcending sheer utility, sublime design strives to understand who we are and who we want to be.” (pg.50)

  • “…there is great joy in the crafting of a tool and the implicit set of ideas it embodies – presenting a different mindset while hiding complexities that the user shouldn’t have to think about.” (pg.200)

  • “Aesthetically [a software system] represents a way of working, playing and living with computers and the things we make.” (pg.203)

  • “Design connects the medium to the message. The art of design draws from the essential qualities of a medium, to create something that would not and could not exist as meaningfully in another medium. The medium should become the message so completely that the medium seemingly melts away, leaving only the essential message - and the illusion that the medium doesn’t matter. However, this is only an illusion, for the medium matters fundamentally: it is how we unfold the message.” (pg.205)

  • “In a time of rapidly increasing automation, the artful designer must be cognizant of situations in which it is essential to design the human into the loop. Interfaces ought to extend us, make us feel a sense of embodiment in their use, giving us new hands to interact with the world around us.” (pg.206)

  • “All musical instruments are interfaces, a kind of implicit contract of interaction between a player and an underlying mechanism of sound production. The interaction between them manifests itself as an active, ongoing feedback loop – a dynamic process, one in which we are constantly evaluating the results of our actions, ever fine-tuning the relationship.” (pg.207)

  • “…the most effective and elegant interactions are the result of interfaces that mediate and seamlessly bind the user and the artefact into a single system.” (pg.207)

  • “The result may reside below our conscious notice, but interaction always induces a consequence of experience, a flavour to the encounter.” (pg.208)

  • “The aesthetics of interaction lie in the elegance and expressiveness of use.” (pg.208)

  • “Instrument design is an extreme form of interface design. Deceptively tricky, it demands both simplicity of interaction and the potential for complexity and richness in its output.” (pg.208)

  • “An instrument is a tool: it exists because it offers at least one core aspect, however subtle, that it does better than anything else.” (pg.209)

  • “…it was hands that were the working surface, the hands that felt and manipulated the universe, human beings thought with their hands. It was their hands that were the answer of curiosity, that felt and pinched and turned and lifted and hefted. There were animals that had brains of respectable size, but they had no hands and that made all the difference.” (Isaac Asimov, Foundation’s Edge)

  • “The many types of gestures our bodies are capable of making give rise to technologies to sense them; they are the building blocks of interface design.” (pg.211)

  • “We tend to think of computers, algorithms, machine learning, and artificial intelligence as pure automation that produces some output – and it’s easy to overlook their potential for human interaction, but many things computers can do can be fundamentally improved by placing human intentionality, tacit knowledge, instinct, and aesthetics into the interaction loop – embodying the idea that computers ought not be replacements but extensions of us.” (pg.218)

  • “authentic synthesis of technology and the human is not only possible but truly interesting.” (pg.219)

  • “Funny is often better than serious – wit is never a bad thing in design! It can offer humour, interest, commentary, or whimsy (an art in itself).” (quoting Perry Cook, pg.288)

  • “…I don’t believe in top-down design; I tinker, I make, I try to craft a piece – not an instrument (the latter naturally emerges out of necessity).” (pg.290, Perry Cook)

  • “…the programmability of computer-based music systems often makes them too easy to configure, redefine, remap, etc. For programmers and composers, this provides an infinite landscape for experimentation, creativity, writing papers, wasting time, and never actually completing any art projects or compositions!” (pg.290, Perry Cook)

  • “When people think of computers, they often think of computers being ‘smart’ and somehow adapting to you. I am less interested in an instrument that wants to learn me – and more in an instrument that allows me to learn it!” (pg.291, Perry Cook)

  • “…what is the instrument designer’s imperative in this age of technology? …it’s not necessarily to make generalised instruments that others can learn (such as a luthier or violin maker does) rather the imperative is for our own process.” (pg.298)

  • “Design as one’s own artistic exploration.” (pg.298)

  • “…programming as a design medium is now increasingly accessible to everyone – not necessarily part of a career, but more like playing an instrument for its sheer intrinsic joy and experience. It’s a creative tool – no more, no less!” (pg.299)

3.2: ‘The Design of Everyday Things’ by Don Norman

  • Norman simplifies human processing into three levels - although “…a gross oversimplification of the actual processing, it is a good enough approximation to provide guidance in understanding human behaviour.” (pg.49)

  • The lowest level of human processing is “The visceral level”, involving responses that are quick and subconscious (pg.50)

  • “For designers, the visceral response is about immediate perception… This has nothing to do with how usable, effective, or understandable the product is. It is all about attraction or repulsion.” (pg.51)

  • The next level of human processing is “The behavioural level”, which “is the home of learned skills, triggered by situations that match the appropriate patterns.” (pg.51)

  • “For designers, the most critical aspect of the behavioural level is that every action is associated with an expectation. Expect a positive outcome and the result is a positive affective response… Expect a negative outcome and the result is a negative affective response… dread and hope, anxiety and anticipation. The information in the feedback loop of evaluation confirms or disconfirms the expectations, resulting in satisfaction or relief, disappointment or frustration.” (pg.52)

  • “Behavioural states are learned. They give rise to a feeling of control when there is good understanding and knowledge of results, and frustration and anger when things do not go as planned, and especially when neither the reason nor the possible remedies are known.” (pg.52)

  • “Feedback provides reassurance, even when it indicates a negative result. A lack of feedback creates a feeling of lack of control, which can be unsettling. Feedback is critical to managing expectations, and good design provides this.” (pg.52)

  • The third level of human processing is “The reflective level” which “is the home of conscious cognition.” (pg.53)

  • “Reflection is cognitive, deep, and slow. It often occurs after the events have happened. It is a reflection or looking back over them, evaluating the circumstances, actions and outcomes, often assessing blame or responsibility. The highest level of emotions come from the reflective level, for it is here that causes are assigned and where predictions of the future take place. Adding causal elements to experienced events leads to such emotional states as guilt and pride (when we assume ourselves to be the cause) and blame and praise (when others are thought to be the cause).” (pg.53)

  • “The behavioural level, which is the home of interaction, is also home of all expectation-based emotions, of hope and joy, frustration and anger. Understanding arises at a combination of the behavioural and reflective levels. Enjoyment requires all three.” (pg.54)

  • “…badly designed devices can induce frustration and anger, a feeling of helplessness and despair, and possibly even hate. Well-designed devices can induce pride and enjoyment, a feeling of being in control and pleasure - possibly even love and attachment.” (pg.55)

  • “One important emotional state is the one that accompanies complete immersion into an activity, a state that the social scientist Mihaly Csikszentmihalyi has labelled ‘flow’. Csikszentmihalyi has long studied how people interact with their work and play, and how their lives reflect this intermix of activities. When in the flow state, people lose track of time and the outside environment. They are at one with the task they are performing. The task, moreover, is at just the proper level of difficulty: difficult enough to provide a challenge and require continued attention, but not so difficult that it invokes frustration and anxiety.” (pg.55-56)

  • “The flow state occurs when the challenge of the activity just slightly exceeds our skill level, so full attention is continually required… The constant tension coupled with continual progress and success can be an engaging, immersive experience sometimes lasting for hours.” (pg.56)

3.3: ‘Cas Holman: Design for Play’ (Abstract Season 2, Episode 4)

  • “We don’t design the play, we design for the circumstances of play to arise.” (02:15)

  • “I like to start with kind of an experiential goal and then design the object or the system or the playground or whatever it is, design that around it.” (02:41)

  • “…there’s something I think wonderful about when the magnets don’t meet. And if it had the instructions built into it through colour coding you would figure out [what didn’t work] and then never do it wrong and never have that moment.” (08:56)

  • “The motto and kind of driving force [of Heroes Will Rise] is ‘easy is boring.’ ‘Easy’ meaning something that doesn’t engage your thinking.” (11:09)

  • “Lots of toys are very goal oriented. They look like something, there’s one way to play with it. So, you quickly find out it can go this way or that way and then you’re done. So it sort of shuts down that building on their own, pleasure and engagement and enjoyment.” (12:41) [Tovah Klein, Director at Barnard Centre For Toddler Development]

  • “That’s also the point of play is that there’s not an outcome other than that it’s intuitive.” (20:10)

  • “Any toy could be turned into a non-open-ended toy. The way I see it working against the child is that it says to them, ‘either you do it right, or you do it wrong.’ If you feel like you aren’t good at something, you just retreat. Children are curious. They want to engage with the world, and you give them open-ended toys, they’re much happier.” (26:25) [Tovah Klein]

  • “[An open-ended toy] says to the child, ‘your ideas are really important.’ Right? Because if you’ve created something and it was your ideas and now you carried them out, then it’s yours.” (27:49) [Tovah Klein]

  • “Good toys make good people.” (34:33)

  • “The driving ethos is ‘what were you curious about?’ And that’s such a big shift from ‘what did you learn?’ Right? That there’s an outcome, that there’s a thing that is to be learned, that we know what can be learned and therefore you’re out to look for it.” (41:17)

3.4: ‘Embodied User Interfaces: Towards Invisible User Interfaces’, Fishkin et al. 1999

  • “…the manipulation and the virtual representation are integrated within the same object, a paradigm which we term Embodied User Interfaces” (pg.1)

  • “…to minimise the cognitive distance between a task goal and the human actions needed to accomplish that task. We believe these interaction paradigms are on an evolutionary path towards an ideal of the invisible user interface,” (pg.2)

  • “There is coincidence of input and output in the device.” [in embodied user interfaces] (pg.3)

  • “…the user’s task environment should be embodied within a physical/computational device, and that this embodied task should be linked to an analogous real-world task.” (pg.3)


  1. Perry Cook, quoted in: Ge Wang, Artful Design: Technology in Search of the Sublime (Stanford, CA: Stanford University Press, 2018), pg.288.↩︎

  2. ‘Punchdrunk’, accessed 21 August 2022, https://www.punchdrunk.com/.↩︎

  3. Janet Cardiff, ‘The Missing Voice (Case Study B) | Artangel’, accessed 22 August 2022, https://www.artangel.org.uk/project/the-missing-voice-case-study-b/.↩︎

  4. ‘Prof Atau Tanaka’, Goldsmiths, University of London, accessed 22 August 2022, https://www.gold.ac.uk/computing/people/tanaka-atau/.↩︎

  5. ‘MiMU | Home’, accessed 22 August 2022, https://mimugloves.com/.↩︎

  6. ‘Douglas McCausland // Official Website’, doug-mccausland, accessed 22 August 2022, https://www.douglas-mccausland.net.↩︎

  7. Onyx Ashanti, ‘Onyx Ashanti | Speaker | TED’, accessed 22 August 2022, https://www.ted.com/speakers/onyx_ashanti.↩︎

  8. ‘Michel Waisvisz – DIGITAL ART (1960-2000)’, accessed 22 August 2022, https://www.digitalcanon.nl/?artworks=michel-waisvisz.↩︎

  9. Donald A. Norman, The Design of Everyday Things, Revised and expanded edition (Cambridge, Massachusetts: The MIT Press, 2013).↩︎

  10. Norman, pg.38↩︎

  11. Norman, pg.38↩︎

  12. Norman, pg.38↩︎

  13. Norman, pg.40↩︎

  14. For details on how this play testing was conducted and an evaluation of its limitations, see Appendix E, Item 2.↩︎

  15. Norman, pg.55-56↩︎

  16. ‘Cas Holman: Design For Play’, Abstract: The Art of Design (Publikro London, RadicalMedia, Tremolo Productions, 21 January 2017). (timecode - 00:11:09)↩︎

  17. Andy Hunt and Ross Kirk, ‘Mapping Strategies for Musical Performance’, 2000, pg.251.↩︎

  18. Hunt and Kirk, pg.251.↩︎

  19. Hunt and Kirk, pg.251.↩︎

  20. Hunt and Kirk, pg.251.↩︎

  21. Tovah Klein, quoted in: Holman, (timecode - 00:12:41)↩︎

  22. N.B. this patch was used in conjunction with a Reaper project that contained looped audio samples and the Valhalla Frequency Echo plugin, which I have not included here. This patch is therefore not as ‘playable’ as the others in this project.↩︎

  23. ‘Wekinator | Software for Real-Time, Interactive Machine Learning’, accessed 22 August 2022, http://www.wekinator.org/.↩︎

  24. ‘Valhalla Freq Echo: Frequency Shifter Plugin | Free Reverb Plugin’, Valhalla DSP (blog), accessed 22 August 2022, https://valhalladsp.com/shop/delay/valhalla-freq-echo/.↩︎

  25. For a further analysis of this sensation of joy through surprise, see Appendix E, Item 17.↩︎

  26. Future Flavours of Sound Festival 2022 - Audio Programming and Technologies, 2022, https://www.youtube.com/watch?v=iCKRifnjdDA. (timecode - 01:19:18)↩︎

  27. For details on how this play testing was conducted and an evaluation of its limitations, see Appendix E, Item 2.↩︎

  28. Atau Tanaka, Embodied Sonic Interaction: Gesture, Sound and the Everyday, 2013, https://www.youtube.com/watch?v=IyOUVixqmTU.↩︎

  29. For details on how this play testing was conducted and an evaluation of its limitations, see Appendix E, Item 2.↩︎

  30. Tanaka, Embodied Sonic Interaction: Gesture, Sound and the Everyday.↩︎

  31. ‘Embody’, Collins English Dictionary – Complete and Unabridged, 12th Edition, 2014, www.thefreedictionary.com/embody.↩︎

  32. Fishkin et al., ‘Embodied User Interfaces: Towards Invisible User Interfaces’, accessed 8 August 2022, https://link.springer.com/chapter/10.1007/978-0-387-35349-4_1. pg.1↩︎

  33. Fishkin et al., pg.2↩︎

  34. Wang, pg.238↩︎

  35. Wang, pg.207↩︎

  36. Wang, pg.245↩︎

  37. ‘Sound Design(ed) Futures : New realities, spaces, technologies’, accessed 27 May 2022, https://lisaa.univ-gustave-eiffel.fr/actualites/actualite/sound-designed-futures-new-realities-spaces-technologies.↩︎

  38. To read more about this discussion, see my blog post on the conference in Appendix E, Item 1.↩︎

  39. Future Flavours of Sound 2022 (timecode - 01:16:22), https://www.youtube.com/watch?v=iCKRifnjdDA.↩︎

  40. Future Flavours of Sound 2022 (timecode - 01:35:14), https://www.youtube.com/watch?v=iCKRifnjdDA.↩︎

  41. For details on how this play testing was conducted and an evaluation of its limitations, see Appendix E, Item 2.↩︎

  42. For details on how this play testing was conducted and an evaluation of its limitations, see Appendix E, Item 2.↩︎

  43. Future Flavours of Sound 2022, https://www.youtube.com/watch?v=iCKRifnjdDA.↩︎