Luuma, Between Code and Controllerism

Chris Kiefer

University of Sussex (UK)

<http://dx.doi.org/10.12801/1947-5403.2018.10.01.15>

A watershed moment came fairly recently, one that showed up the last embers of a music production mindset that had been ingrained in me since I was a teenager messing around with Cubase. I needed to carry out a studio task, so I naturally turned to some digital audio workstation (DAW) software, looked at it, shut it down and did the same thing in code instead, in the audio programming language SuperCollider. It was just an everyday task, but it no longer seemed natural to use the DAW. You might even say I livecoded the task; it wasn’t a performance, but I approached it performatively, using the same style of coding I would typically use in a performance, sculpting a program in real time until it did the right thing. Doing it this way seemed more straightforward than using the DAW. My way of thinking has shifted; I prefer to express musical ideas algorithmically. This isn’t to say that DAWs aren’t amazing tools; they just don’t fit so well with how I tend to think.

What follows is a mixed bag of thoughts and reflections on my experience in livecoding, an account of a slow perspective shift from controllerism into habitual livecodeism, and an underlying story of generally failing to find a way to combine these two things well. A quick background: I’m a committed coder either way. I did a computer science degree and have worked as a programmer, so coding comes naturally to me as a means of being creative with computers. I’ve always programmed my own musical tools, starting with live MIDI sequencing software in the 90s. Code was always part of music for me, but rarely during performance (unless you count fixing a mid-gig “blue screen of death”). I played at my first algorave in 2013 in Brighton, although I didn’t do much coding on stage other than running a mass of SuperCollider code at the beginning, before playing the gig with various controllers. Since then I’ve experimented with different combinations of controllers and code, some of them more esoteric, semi-ridiculous or successful than others.

In my view, one of the great things about algorave culture is its informality and the freedom to experiment. Audiences tend to arrive with more curiosity and fewer expectations, and this has encouraged me to try out some creatively high-risk ideas, because I wasn’t too bothered about the consequences of them going wrong, which of course some of them did. Infloresence was one of these high-risk ventures: a hybrid system that attempted to merge gestural control with coding, with the (probably flawed) idea of using something more expressive than buttons to make code. The controller was a collection of 6-axis motion sensors mounted on animator’s wire (Kiefer 2015b), feeding out a large state vector that was turned into code using genetic programming techniques. The functions called from within this generated code could be livecoded, meaning that the system could be used gesturally but also programmed from the keyboard/touchpad. The instrument was at once highly intuitive and hugely difficult to play, due to the massively non-linear state space of genetic code generation, but when it was programmed with decent constraints it was quite addictive to use. As a performance tool, though, I found it difficult to switch between gestural control and typing code, and to move my attention from one to the other. I didn’t rebuild this instrument after it eventually fell into disrepair, but I carried on using genetic programming techniques in other forms: for automatic generation of visuals, for mapping from audio features to GLSL shaders, and for a browser-based instrument called ApProgXimate Audio (Kiefer 2015a; Kiefer 2016), which allows coding with either text or GUI sliders. This latter system works well by confining the interface to mouse and keyboard. I haven’t performed with it, but I have really enjoyed using it for sound design.
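To give a flavour of the livecodable-functions idea, here is a minimal SuperCollider sketch of just that design point; the names (~osc, ~fold, ~generated) and the synthesis are placeholder assumptions, not code from the actual system. Generated code calls named building blocks, and those building blocks remain open to redefinition from the keyboard.

```supercollider
// Livecodable building blocks called from generated code (assumed names).
// Assumes the audio server is running (s.boot).
(
// functions a code generator is allowed to call
~osc  = { |freq| SinOsc.ar(freq) };
~fold = { |sig, amt| (sig * amt).fold(-1, 1) };

// the kind of expression a generator might emit: a composition of the named functions
~generated = { ~fold.(~osc.(110) + ~osc.(165), 3) * 0.2 };

Ndef(\gen, { ~generated.value }).play;
)

// livecode a building block, then re-evaluate the Ndef to rebuild the sound
~osc = { |freq| LFTri.ar(freq * [1, 1.01]).mean };
Ndef(\gen, { ~generated.value });
```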

Infloresence’s experimental mix of controllerism and code, while quite challenging, was also enlightening, and I tried to do the same in more constrained circumstances. I used a small coding system within a wider collection of controllers by livecoding GLSL shaders in an audiovisual feedback system. I also tried the other end of this spectrum, exploring a mainly text-based setup with a set of extra controllers whose data was streamed into SuperCollider over MIDI and over serial connections to an Arduino. These streams were inserted into livecoded processes using a library designed for quick mapping of control streams. For this performance, I attempted to follow the livecoding ethic of showing your inner thought process by wearing a webcam on my head and projecting the image of the controllers-in-use next to the code. This setup was in some ways really satisfying to play, but ultimately it didn’t feel right for performance. Aside from looking really quite daft, building up the mappings along with the sound felt too slow and the performance dragged; I wasn’t sure if I was programming a controller or controlling a program. I still use this system for composition, though, outside of performance time (and without the webcam)!
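As an illustration of this kind of mapping, here is a minimal SuperCollider sketch rather than the actual library: a MIDI control change is written to a control bus, and a livecoded process reads from that bus, so both the mapping and the sound can be rewritten on the fly. The CC number, scaling and synth are placeholder assumptions.

```supercollider
// Streaming a MIDI controller into a livecoded process (assumed CC number and ranges).
// Assumes the audio server is running (s.boot).
(
MIDIClient.init;
MIDIIn.connectAll;

// a control bus that livecoded processes can read from
~cutoffBus = Bus.control(s, 1);
~cutoffBus.set(1000);

// map CC 1 onto a filter cutoff range
MIDIdef.cc(\cutoffMap, { |val|
	~cutoffBus.set(val.linexp(0, 127, 200, 8000));
}, ccNum: 1);

// a livecoded process picking up the control stream
Ndef(\drone, {
	RLPF.ar(Saw.ar([50, 50.3]), In.kr(~cutoffBus).lag(0.1), 0.2) * 0.2
}).play;
)
```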

These few experiments in mixing controllerism with code ultimately didn’t have longevity, but they did have a huge bearing on a current project, the feedback cello. This instrument is a hybrid acoustic-digital system, consisting of a cello with mounted transducers and pickups. It was built in 2016, based on Halldór Úlfarsson’s Halldorophone (2018). Sound from the strings is processed digitally and replayed through the cello, creating a feedback system which can be controlled very sensitively by playing or damping the cello strings. It can also be played through the digital part of the feedback loop, which in this case happens in SuperCollider. When I first performed with this (as half of Feedback Cell (Eldridge and Kiefer 2016)), the main way of engaging with the instrument was livecoding. I chose this partly because I’m not a trained cellist and it felt more comfortable playing SuperCollider, and partly because code seemed the most natural way to express very precise and subtle interventions in the feedback loop, where micro-scale adjustments have emergent macro-scale consequences. In performance, I sat at my laptop and livecoded the feedback loop, while also reaching out to the cello, which stood next to me, controlling the feedback by damping the strings. This setup was surprisingly intuitive to use, and it seemed to work well because there was an easy, slow balance between gesture and text. Ultimately, I’ve shifted away from this setup during performance, because there are now mounted controls on the cello that map into SuperCollider and I wanted to play the instrument in full. However, this livecoding-gesture hybrid approach has been absolutely essential for composing mappings and signal processing within a workflow that fits the instrument.
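For a rough picture of the digital half of the loop, here is a minimal SuperCollider sketch under assumed routing (pickup on hardware input 0, body transducer on output 0); it is not the instrument’s actual code. Livecoding the loop then means re-evaluating this processing while the acoustic side keeps feeding back.

```supercollider
// A sketch of the digital part of the feedback loop (assumed routing and processing).
Ndef(\fbCello, {
	var in, fb;
	in = SoundIn.ar(0);                        // signal from the cello pickup
	fb = LPF.ar(in, \cutoff.kr(1200));         // shape which resonances the loop favours
	fb = DelayC.ar(fb, 0.2, \delay.kr(0.02));  // a short delay shifts the resonant modes
	Limiter.ar(fb * \gain.kr(2), 0.9);         // gain drives the feedback; the limiter protects ears and speakers
}).play(out: 0);                               // replayed through the transducer on the cello body
```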

Figure 1. Live Coding a Feedback Cello at xCoAx, Bergamo. Credit: Pedro Tuela (2016).

Now, when playing algoraves, I have veered over to a mostly text-based approach, with modifications to the editor environment that bring in meta-control of the code, such as triggered, quantised queues for running new code updates, and some touchpad control. It seems that I’ve polarised to gestural control with a hint of livecoding, and livecoding with a hint of gesture. Trying more balanced combinations, apart from some exceptions, didn’t work in an intuitive way; this might come down to a question of matching timescales of interaction, and of making the instrument feel like an integrated whole. Looking more widely, in and outside of performance, livecoding has become my default way of engaging with music and computers. This may be because livecoding performance has made the way I think about music more algorithmic. It may also be because, compared to other tools, it offers an expressive and creative way to work that is closer to the machine.
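To give one concrete example of that meta-control, here is a minimal SuperCollider sketch of quantised launching; it is an assumption about one way to do it rather than my actual editor modifications. A code update is queued on the clock so that it runs at the next bar boundary, keeping edits in time with the running material.

```supercollider
// Quantised launching of code updates (a sketch; tempo and quantisation are assumed).
(
t = TempoClock.default;
t.tempo = 135/60;  // 135 bpm

// run a function at the next 4-beat boundary
~quantRun = { |func| t.schedAbs(t.nextTimeOnGrid(4), { func.value; nil }) };

// e.g. swap in a new pattern on the downbeat
~quantRun.({
	Pdef(\beat, Pbind(\dur, 0.25, \degree, Pwhite(0, 7))).play(t, quant: 4);
});
)
```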

Author Biography

Chris Kiefer is a computer-musician, musical instrument designer and Lecturer in Music Technology at the University of Sussex, where he is a member of the Experimental Music Technologies Lab. He performs with custom-made instruments including malleable foam interfaces and hacked acoustic instruments. As a live-coder he performs under the name Luuma. Most recently he has been playing an augmented self-resonating cello as half of improv-duo Feedback Cell, and with the newly formed feedback-drone-quartet Brain Dead Ensemble.

Email: <c.kiefer@sussex.ac.uk>

Web: <http://luuma.net>

References

Eldridge, Alice and Chris Kiefer. 2016. “Continua: A Resonator-Feedback-Cello Duet for Live Coder and Cellist”. In Proceedings of the 4th Conference on Computation, Communication, Aesthetics and X, 398–401. Bergamo: xCoAx.

Kiefer, Chris. 2015a. “ApProgXimate Audio”. <http://approgx.luuma.net> (accessed 26 June 2018).

Kiefer, Chris. 2015b. “Approximate Programming: Coding Through Gesture and Numerical Processes”. In Proceedings of the First International Conference on Live Coding, 98–103. Leeds, UK: University of Leeds. <http://doi.org/10.5281/zenodo.19340>.

Kiefer, Chris. 2016. “ApProgXimate Audio: A Distributed Interactive Experiment in Sound Art and Live Coding”. International Journal of Performance Arts and Digital Media, 12(2): 195–200. <http://dx.doi.org/10.1080/14794713.2016.1227599>.

Úlfarsson, Halldór. 2018. “The Halldorophone: The Ongoing Innovation of a Cello-Like Drone Instrument”. In Proceedings of New Interfaces for Musical Expression, 269–74.