Limits in Algorithmic Dance Music
A friend and I have been discussing what is lost in algorithmic music. Without having personally used an algorithmic music system, he proposed tentatively that a human sitting at a piano might have more possibilities, or fewer limits, than a performer using an algorithmic music system. Having used algorithmic music systems for over 20 years and having also studied piano, I felt intuitively that this was not the case. Most obviously, the timbre of a piano (even a prepared one) is limited compared to what a computer music language can produce. A person sitting at a piano lacks the endurance, creativity, and speed to execute the range of patterns which algorithmic systems can generate; the limitation is starker still when you consider that the human pianist must practice. The number of immediately realizable possibilities in an algorithmic system is on a completely different scale from that of a piano. For example, after booting my system, I immediately have access to thousands of rhythm patterns; even an extremely creative pianist would struggle to produce a small percentage of them in a reasonable time.
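The scale of that claim can be made concrete with a small sketch (my own illustration, not drawn from any particular system): over a single 16-step grid, even the naive space of on/off patterns runs into the tens of thousands.

```python
from itertools import product

STEPS = 16  # one bar of sixteenth notes, a common dance-music grid

# Every on/off combination over the grid is a distinct rhythm pattern:
# 2**16 = 65536 of them, available as soon as the system boots.
patterns = list(product((0, 1), repeat=STEPS))
print(len(patterns))   # 65536

# A more playable subset: a kick on the downbeat, 4 to 6 onsets total.
usable = [p for p in patterns if p[0] == 1 and 4 <= sum(p) <= 6]
print(len(usable))     # 4823
```

Even the filtered subset of 4,823 patterns is far beyond what a single pianist could improvise in one session, and real systems generate far richer material than binary onset grids.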
Still, algorithmic music has limits, particularly algorithmic dance music. The limitations of the genre may be growing fewer, but they persist, particularly in audience perceptions of danceability. Personal interaction with audiences tells me that different segments of an audience simultaneously want algorithmic dance music to go beyond the sounds of other dance music styles and to conform to some timbral or formal characteristics of contemporary dance music; if this music is meant to be consumed by a dance-floor audience, taste must be accommodated. Perhaps all that can be done to overcome these limitations is to present music at the borders of the norms and hope that those borders are stretched to some degree.
Beyond the limits described above, I feel some technical limits:
the interface: Several issues impair the usability of algorithmic systems, such as the ease with which novel patterns can be entered, the speed with which large numbers of parameters can be set and the simplicity of entering new algorithms for the system to use (Blackwell and Collins 2005).
the algorithms used to generate patterns: While existing tools can generate a wide variety of patterns, the difficulty of making new and interesting algorithms remains no different from that of a pianist trying to generate new material at a keyboard.
synthesis techniques: It is difficult to find the right balance between a system which is complex enough to provide the widest possible variation in timbre, while simultaneously simple enough to be manipulated by a single user in real-time (Tolonen et al. 1998).
coordination between systems: While it is simple for two musicians on traditional acoustic instruments, such as a drummer and a pianist, to play together, greater technical hurdles stand in the way of bringing two live algorithmic music performers into the same degree of sync that we perceive those traditional musicians to achieve.
To overcome some of these limits, I am pursuing the following directions:
elegant expression of core system functions: The interface of a live coding system needs to be efficient for a performer to produce changes which come fast enough to entertain audiences. I am trying to optimize the interface of the core functions of my algorithmic system to improve their usability.
handling parameters and processes with agents: Because the number of simultaneous processes and parameters can quickly exceed those of traditional instruments, the algorithmic performer needs additional tools to handle them, or a simpler interface. Agent processes, meaning autonomous systems within an environment that sense that environment and act with purpose over time (Franklin and Graesser 1996), have in my experience made this easier when an appropriate interface is also provided. I am exploring different types of agents and means of linking them so that changes remain coherent across the behaviour of all agents (or a subset of them). In my system, Conductive, one such agent observes rhythm patterns and changes them based on two observed factors: the length of time since its last change of base pattern (loosely, its "boredom") and the "boredom" of the other agents in the system.
connecting systems through OSC: I am working towards being able to easily link two systems for deeper and more meaningful improvisations. A pianist, for example, seldom receives a truly direct influence from a drummer: the pianist listens and possibly modifies their playing in response to the drummer, and vice versa. Algorithmic systems, however, open the possibility for one performer to actually manipulate a co-performer's system in superficial or very deep ways. Taking cues from similar efforts such as BenoitLib (Borgeat 2010) and Republic (supercollider-quarks 2014), both for SuperCollider, I am working on tools that use OSC to send messages to and receive messages from other systems, and then respond to received messages by changing the state of the system, including the algorithms at its core.
generating algorithms for generating rhythms: I am not yet technically prepared for this advanced topic, but I would like to reach that point.
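The boredom-driven agent behaviour described above can be sketched as follows. This is a hypothetical Python illustration of the idea, not Conductive's actual implementation; the class and attribute names are my own.

```python
import random
import time

class PatternAgent:
    """Illustrative boredom-driven agent (a sketch, not Conductive's code).

    Boredom grows with the time since the agent last changed its base
    pattern; an agent also grows restless as its peers get bored, so
    changes ripple through the whole group.
    """

    def __init__(self, name, patterns, threshold=8.0):
        self.name = name
        self.patterns = patterns           # candidate base patterns
        self.current = patterns[0]
        self.threshold = threshold         # seconds of patience
        self.last_change = time.monotonic()
        self.peers = []                    # other agents this one observes

    def boredom(self):
        """Seconds since the last base-pattern change."""
        return time.monotonic() - self.last_change

    def step(self):
        """Swap base pattern when own plus peers' boredom crosses the threshold."""
        peer_boredom = (sum(p.boredom() for p in self.peers) / len(self.peers)
                        if self.peers else 0.0)
        # Own boredom counts fully; peers' average boredom adds gentler pressure.
        if self.boredom() + 0.5 * peer_boredom > self.threshold:
            self.current = random.choice(self.patterns)
            self.last_change = time.monotonic()
```

Calling `step()` periodically from a scheduler is enough for changes to propagate: one bored agent's swap resets its own clock, while its peers' accumulated boredom nudges them towards swapping soon after.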
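The inter-system messaging described in the OSC item above can be sketched with the standard library alone, following the OSC 1.0 encoding rules (padded address, type tag string, big-endian arguments). Real systems would normally use an OSC library, and the address `/conductive/tempo` is a hypothetical example, not an actual Conductive endpoint.

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per OSC 1.0."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message with int32 and float32 arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# One performer nudging a co-performer's tempo (hypothetical address):
msg = osc_message("/conductive/tempo", 140.0)
```

The resulting datagram can be sent over UDP with `socket.sendto(msg, (host, port))`; a receiving system parses the address and dispatches it to a handler that changes its state, which is exactly the loop described above.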
Successfully dealing with these limitations will increase the sense of freedom I feel when using my system, and hopefully my own progress can contribute towards expanding the range of possibility within the field of algorithmically-produced dance music.
Tokyo-based Renick Bell improvises bass-heavy algorithmically-generated music full of percussion and noise by live coding with open source software, including software he has written called Conductive. He has just released an album on Rabit's Halcyon Veil label and released an EP on Lee Gamble's UIQ in late 2016. His music practice corresponds with a research practice of writing software and writing research papers on live coding, electronic music and art.
Blackwell, Alan, and Nick Collins. 2005. "The Programming Language as a Musical Instrument". In Proceedings of the 17th Workshop of the Psychology of Programming Interest Group (PPIG05), 3: 120-30. Brighton: University of Sussex.
Borgeat, Patrick. 2010. "BenoitLib: SuperCollider Extensions Used by Benoît and the Mandelbrots". GitHub.com. <https://github.com/cappelnord/BenoitLib> (accessed 3 August 2018).
Franklin, Stan, and Art Graesser. 1996. "Is it an Agent, or Just a Program?: A Taxonomy for Autonomous Agents". In International Workshop on Agent Theories, Architectures, and Languages, ed. Jörg Müller, Michael Wooldridge and Nicholas Jennings, 21-35. Berlin: Springer. <https://doi.org/10.1007/BFb0013570>.
supercollider-quarks. 2014. "Republic: Simplify Synchronisation of Networks and Make it Easy to Join and Quit a Running Session". GitHub.com. <https://github.com/supercollider-quarks/Republic> (accessed 3 August 2018).
Tolonen, Tero, Vesa Välimäki and Matti Karjalainen. 1998. Evaluation of Modern Sound Synthesis Methods. Espoo: Helsinki University of Technology.
In this context "an autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future" (Franklin and Graesser 1996).