“In the manner of Beethoven’s quill”: Composers, computers, and the redefinition of music as an art form and practice


By Jason Cullimore

Introduction
Many composers today have integrated computer hardware and software into their creative processes. How does the rapid evolution of technology affect the relationship between composers and the tools they use? Fifty or more years ago, readers might have assumed that a discussion of composers’ tools would include consideration of musical instruments, tape recording devices, pens and ink. A topic of such scope might appear rather trivial, since in order to appreciate a piece of music, one does not normally feel the need to inquire about the type of quill that was used to write it down. Since the 1950s, however, when the first musical works realized using computers were developed, innovations in computer hardware and software design have transformed the way in which music is produced, and computers have become an important—for some, irreplaceable—feature of many composers’ creative processes. Yet unlike a simple quill, tools such as computers can act as more than simple extensions of the human body. I would argue that they can be extensions of our mental processes as well, including those involved in our most profound creative activities. Some technologies may even function as an autonomous substitute for some of the composer’s own actions.

Taken to an extreme, the computer can now alter a composer’s musical score during the performance of a work, sensing its audience’s responses as the music unfolds and restructuring the score so as to intensify or redirect the listener’s experience of the piece. Could Mozart today rewrite one of his masterpieces to respond to the tastes of contemporary audiences? Even if he had wished to do so, his spirit and agency have long since left this world. By contrast, a computer and its software may be duplicated, upgraded and reconstructed ad infinitum. The composer today has the option of embracing new expressive techniques offered by computers and exploring the intriguing creative possibilities they offer. Yet when embracing tools that enable new approaches to producing and experiencing music, two questions arise about how the integration of technology will shape the future form and role of music:

  1. Does the integration of computer technology into music composition processes enrich the art form, or damage it?
  2. Ultimately, is musical sound generated by a computer, or presented in ways that challenge traditional norms of music creation (for example, by introducing computer-mediated interactivity or chance into the composing process), truly definable as music as we currently understand it?

 

What music is, and is not
To discover the nature of computer music, it is necessary to understand that which constitutes a musical work. In his influential 1980 essay “What a Musical Work is”,[1] Jerrold Levinson argues that a (classical Western) musical work must satisfy three criteria in order to achieve the status of a discrete composition:

  1. It must be brought into being by the activity of the composer;
  2. It must be uniquely identified with its individual composer;
  3. It must involve clearly-defined performance means.

As an example, let us briefly consider Ludwig van Beethoven’s Hammerklavier piano sonata, written in 1818, through Levinson’s lens:

  • Beethoven (1770–1827) conceived and recorded its form, and through this act enriched Western classical music;
  • We associate the work with Beethoven, and only Beethoven. If the piece had been created in identical form by another composer, perhaps by Claude Debussy one hundred years later, this other Hammerklavier sonata would not possess the same meaning for the listener that Beethoven’s version does. It would likely be seen as an aberration within Debussy’s oeuvre, diminishing its impact, whereas it is considered a significant and in some ways representative work in Beethoven’s;
  • The Hammerklavier sonata achieves its success partly due to Beethoven’s choice of performance means, the piano. The physical straining of the piano during a thunderous performance of the sonata is integral to the effect of the work. The same piece played on a polyphonic synthesizer, which cannot strain physically, would lose this vital aspect.[2]

The Hammerklavier sonata is thus a distinct piece of music that satisfies Levinson’s definition of a “musical work”, and no-one before or after Beethoven could have done the same.

Let us now consider the case of a piece of interactive music where its designer has used a computer to enable the music to interact with its audience. Does Levinson’s definition hold up in relation to such a work? An example may prove useful, so for the sake of argument, I have chosen Bloom, a generative and interactive musical experience created in 2008 by Brian Eno and Peter Chilvers that takes the form of an app for iOS-based smartphones.[3] In Bloom, the user touches the screen and the app responds by playing musical notes, with pitch and rhythm roughly determined by the area of the screen the user touches. Bloom also plays a continuous “pad” sound as a backing track, an evocative synthesized texture. Because of the manner in which Bloom is designed, all notes that are played by the user will conform to the harmonies implied by the pad sound (and by each other). The user will find local areas of greater consonance or dissonance, but the overall effect is always soothing and beautiful. Significantly, it is impossible for a user, even a musically untrained one, to play a note that does not fit with the harmonies being generated by the app at any given time. This feature reflects aesthetic choices made by the app’s designers.
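Bloom’s internal algorithm has not been published, but the “no wrong notes” behaviour described above can be sketched in code. The following Python fragment is purely illustrative (the scale choice, MIDI numbering, and function name are my own assumptions, not Eno and Chilvers’s design): every possible touch position is quantized to the nearest tone of a fixed consonant scale, so a dissonant note simply cannot be produced.

```python
# Illustrative sketch only: quantizing touch input to a fixed scale,
# in the spirit of Bloom's design. The scale and mapping are assumed,
# not taken from the actual app.

C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69]  # MIDI notes: C4 D4 E4 G4 A4

def touch_to_pitch(y: float, octaves: int = 3) -> int:
    """Map a normalized vertical touch position (0.0 = bottom of the
    screen, 1.0 = top) to the nearest pitch in the scale, spread over
    several octaves. Any touch whatsoever yields a scale tone."""
    steps = len(C_MAJOR_PENTATONIC) * octaves
    index = min(int(y * steps), steps - 1)
    octave, degree = divmod(index, len(C_MAJOR_PENTATONIC))
    return C_MAJOR_PENTATONIC[degree] + 12 * octave

# A touch at the very bottom of the screen gives the lowest note:
print(touch_to_pitch(0.0))   # MIDI 60 (middle C)
# A touch near the top lands on a high tone of the same scale:
print(touch_to_pitch(0.95))
```

The design consequence is the one noted above: the user retains agency over contour and rhythm, while the programmers’ aesthetic choices (here, the pentatonic scale) guarantee the overall harmonic effect.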

Individuals listening to music produced by this app – but unaware of its computer-generated origins – might assume that they are listening to a discrete musical work, sparse and meditative, but composed of carefully-chosen notes that were recorded precisely because they achieved such an effect. Were this true, the music of Bloom would be fundamentally no different from Beethoven’s Hammerklavier sonata as described above.

But who is the composer of the music of Bloom? The user? Eno and Chilvers? Given that Levinson holds that a musical work must be brought into being at some point by the actions of a composer, and that a musical score produced by Bloom is only fully realized through its interactions with a user, defining Bloom as a musical work would be problematic for Levinson. Bloom as a musical structure comes into being interactively, through its responses to the touch of a user. It is created and recreated every time the app is loaded. In other words: all persons who interact with it take on aspects of the composer’s role, for they are defining elements of the structure of a score that in the Western classical idiom are normally in the domain of a composer.

As a tool of creative composition, however, Bloom can lack flexibility, I would argue. While it does respond to user input, it uses that input to instantiate musical ideas of limited emotional range: textures, harmonies, and melodies that together evoke moods the user cannot alter significantly. Bloom, as a creative tool, can often fail to transmit the more divergent ideas of its user. If, for example, users wish to craft a chaotic, angry theme, they will almost certainly fail because this effect is simply incompatible with Bloom’s programming and interface. It is impossible to write a Hammerklavier sonata with Bloom, therefore, and the lack of such freedom would surely have infuriated Beethoven, were he restricted to using this app to realize his ideas.

The music of Bloom thus represents a different category of artistic creation from the familiar classical score: it has as many composers as it has users, and its music may be brought into being endlessly, never achieving a point of ultimate completion or of established structural definition. To understand what Bloom is, one must recognize that it is not music, a score or a composition. Instead, it must be viewed as an environment through which musical ideas may be generated and explored by its user and designers. Some might argue that Bloom is an artwork, just as the Hammerklavier sonata or a landscape painting is. But in my opinion it is better understood as a super-category of artistic work, one implying a multitude of possible musical structures, all of which can be brought into being through the act of interacting with Bloom, and each of which might in itself be considered a distinct musical composition.

 

The computer as a creative tool
If we accept that computer music systems which incorporate a generative or interactive aspect represent a qualitatively different category of musical art form, then we might ask what this means for the composer, the listener, or the software designer. There are hundreds of musical cultures on Earth, from which many emotionally meaningful masterpieces have resulted. Why would composers want to incorporate the new technologies of computer music into their creative activities? Can a computer be more than a “crutch”, replicating simple thought processes in which a composer does not want to engage (or does not have the means to engage)? Is there something different about the computer compared to other, more traditional compositional tools?

One answer invokes the idea of agency, the ability to act in the world. Composers possess agency with regard to their own compositions – a composer actively chooses the structure of the music and its constituent notes, timings, dynamics, and instrumental colours. But does the listener have agency over the composer’s composition as well? John Cage’s landmark “silent” musical work 4’33” (1952)[4] can provide a relevant perspective. In 4’33” the pianist is instructed to sit at the piano, but make no sound. By omitting audible events from the score, Cage (1912–1992) places the responsibility for constructing the musical work and its meaning upon the audience. Its status as a musical work requires the engagement and affirmation of its audience if it is to have substance.[5] Cage himself broadly advised that “we must arrange our music… so that people realize that they themselves are doing it, and not that something is being done to them.”[6]

When considering the psychophysical and psychological processes by which humans encounter and appreciate music, it becomes evident that music is actually a reconstruction, by the brain, of simple fluctuations in air pressure levels.[7] When graphed in their essential waveform image, these fluctuations carry little or no meaning whatsoever. The listener’s brain must reassemble the music in order to understand it, extracting it from a simpler signal. It follows that if the music has meaning, that meaning must be assembled as well. Thus the listener is always a participant in the act of making a composition, even if that work is the Hammerklavier sonata and its actual composer has nothing more to contribute.
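The point about air pressure can be made concrete. At the physical level, even a sustained concert A is nothing but a long sequence of sampled pressure values; the sketch below (sample rate and frequency chosen for illustration) generates one second of such a tone, and the resulting numbers carry no evident musical meaning until an ear and brain reconstruct them as a pitch.

```python
import math

# One second of a 440 Hz sine tone (concert A), sampled at CD quality.
# Physically, this list of floats *is* the note: a record of air-pressure
# fluctuations, with no inherent musical meaning.
SAMPLE_RATE = 44_100  # samples per second
FREQ = 440.0          # Hz

samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]

# Viewed as raw data, the "music" is merely an oscillating sequence:
print(samples[:4])
```

Everything the listener hears as melody, harmony, or emotion must be assembled by the brain from a signal no richer than this.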

Once we acknowledge that the listener possesses agency with regard to the act of creating a piece of music, the concept of interactive or generative computer music should seem less alien. After all, a work like Bloom simply allows listeners to choose (however approximately) the notes that they wish to hear, based on their understanding of how they would like the music to unfold. Instead of simply thinking “this piece is becoming too boring and repetitive”, the listener can act on such a judgement and steer the piece in a more satisfying direction – a few quick taps on the iPad screen can accomplish this. With an intuitive interface at their fingertips, listeners may find their ability to affect the unfolding of a musical work quite natural and transparent. In other words: the listener simply behaves, and the music responds.

For the composer, however, the advent of computer music presents an entirely different set of implications. For one, the traditional Western classical composer is understood to be primarily responsible for the score of his or her own composition. The computer, on the other hand, can be programmed to manipulate the score of a piece of music. Take David Rokeby’s Very Nervous System,[8] for example. In this piece the listener can instantiate musical figures, motives, percussive hits and any other sonic element with simple gestures. A composer in the traditional sense is not needed. Likewise, David Cope’s Experiments In Musical Intelligence (or EMI)[9] software can, with a sufficiently formatted and detailed database of works by any one composer, produce music that replicates his or her style.[10] In one experiment carried out in 1997, musically trained listeners were fooled into believing that a work by EMI was, in fact, composed by Johann Sebastian Bach (1685–1750), so convincing was EMI’s emulation of the Baroque master’s style.[11]

These examples illustrate how computer music systems may be designed in such a way as to simulate the human agency of a composer, displacing the composer in the process. With sufficient development and a little imagination, it is possible to conceive of works produced with EMI that sound like Bach, Beethoven, the Beatles, or David Cope himself. In comparison, Bloom can produce scores that are relaxing and compelling, channeling the spirit of co-creator Brian Eno’s seminal ambient works. Many other new systems are also being devised that attempt to replicate or replace processes that are traditionally in the domain of the composer. One of the more ambitious is the Algorithmic Music Evolution Engine[12] (or AMEE), which its designers describe as an “assembly line for constructing musical blocks.”[13] AMEE features libraries of musical motives and other structures that are based on specific composers’ commonly-used devices, as well as on its own creations. An easy-to-understand interface allows the user to direct the emotions underlying the music, as well as the structures that make it up. The trend is clear: although it started out as a tool for composers, allowing them to explore new synthetic colours and to record their ideas for posterity, the computer has begun to encroach on many aspects of composers’ creative activity.

As a result, composers – myself included – may soon face an existential crisis and ask difficult questions, such as: will there always be a need for a human composer’s artistic agency? Are composers reaching a point where computers will be able to work so deftly with music structure that their listeners would find a human-composed score simply unsatisfying, or redundant?

 

Conclusion
I would argue that while the computer may be a tool, its power is such that its continued development may call into question the future relevance of music composed by humans. In this case, the tool becomes more than its maker, and eventually casts him or her aside. This is a frightening prospect for the composer, but one that has been faced by many professionals, from the auto assembly line worker replaced by robots to the chess grandmaster defeated by IBM’s Deep Blue computer.[14]

Yet it is important to remember that people still play chess, and composition can remain a worthwhile human activity even in an era when computers can convincingly emulate Bach. In my opinion, the question is not whether people should compose, for those who have something to communicate through original music will always be driven to create it. The question instead is: what manner of machine should serve as the composer’s tool?

The computer is only as capable as the programming that drives it, I would argue, and it is here that composers may find solace and a way forward. If they use the software they create to magnify their own musical ideas, to transmit their agency through the machine instead of allowing it to substitute for them, then the result can be human music, not the cold, thoughtless result of algorithmic computation. In this case, the computer must be seen specifically as a tool, rather than as a crutch that makes otherwise impossible compositional tasks seem practical and easy. Composers should recognize that their art involves struggle, and that this struggle can be heard in their compositions, for example in how the piano strains to ring with the thunderous notes of the Hammerklavier sonata. In fact, this strain and struggle is, in my opinion, the most human aspect of musical composition – to diminish it is to diminish the music itself.

Thus, if the computer is to remain a tool that benefits human composers, I would argue the following: it must exist to be fought with; it must require mastery; and it must not grant one an easy means to achieve its effect. For if it is easy, the machine could do it, and probably will. Instead, designers of computer music tools might think of channeling the spirit of Beethoven. Beethoven fought with his tools, from pianos that were too fragile to withstand performances of his music to the deafness of his own ears.[15] The result of these struggles was transformative music that helped to launch a new, vibrant era of Western classical music. Beethoven did not need a computer to help him realize his vision – he could win his hard-fought battles with a simple quill and his imagination. To sum up: if a computer music system is to become a compositionally meaningful tool, it must serve to transmit the composer’s imagination in the manner of Beethoven’s quill, i.e. transparently conveying the voice and struggles of the composer, but admitting no aid to the lazy or unprepared.



[1] Jerrold Levinson, "What a musical work is," The Journal of Philosophy 77, no. 1 (1980): 5–28.

[2] Ibid.,17.

[3] Bloom may be downloaded from the Apple iOS App Store.

[4] Since its arrangement is not instrument-specific, Cage’s 4’33” has been performed by a wide variety of soloists and ensembles. Here is a version for piano, performed by Armin Fuchs.

[5] Norbert Herber, “Musical behaviour and amergence in technoetic and media arts,” in The Oxford Handbook of Interactive Audio, eds. Karen Collins, Bill Kapralos, and Holly Tessler (Oxford University Press, 2014), 364.

[6] John Cage, quoted in Roy Ascott, Telematic embrace: visionary theories of art, technology, and consciousness (Los Angeles: University of California Press, 2003), 123.

[7] For an introduction to the psychology of music, see Daniel J. Levitin, This is your brain on music: Understanding a human obsession (Atlantic Books Ltd., 2011).

[8] Here is a performance of music using the Very Nervous System, featuring Rokeby himself.

[9] David Cope, "Computer modelling of musical intelligence in EMI,” Computer Music Journal 16, no. 2 (1992): 69–83.

[10] Numerous examples of EMI’s work can be found on the David Cope YouTube channel.

[11] George Johnson, “Undiscovered Bach? No, a computer wrote it,” New York Times, Nov. 11, 1997.

[12] Maia Hoeberechts, Jeff Schantz, and Michael Katchabaw, “Delivering interactive experiences through the emotional adaptation of automatically composed music,” in The Oxford Handbook of Interactive Audio, eds. Karen Collins, Bill Kapralos, and Holly Tessler (Oxford University Press, 2014), 419-42.

[13] Ibid., 428.

[14] Murray A. Campbell, Joseph Hoane Jr, and Feng-hsiung Hsu, "Deep Blue,” Artificial Intelligence 134, no. 1 (2002): 57–83.

[15] Jan Swafford, Beethoven: Anguish and Triumph (Boston: Houghton Mifflin Harcourt, 2014).