Real-Time Computer-Aided
Composition with bach
Andrea Agostini & Daniele Ghisi
Published online: 18 Apr 2013.
To cite this article: Andrea Agostini & Daniele Ghisi (2013) Real-Time Computer-Aided Composition
with bach , Contemporary Music Review, 32:1, 41-48, DOI: 10.1080/07494467.2013.774221
Introduction
The relationship between music and computation is long established, and computation is indeed one of the basic conceptual tools for understanding and speculating about music itself. The highly refined, quasi-algorithmic systems developed by the Flemish composers; the combinatorial complexity of dodecaphonic and serial music; the calculation of the frequency content of sounds as a starting point for the composers of the spectral movement: these are only a few examples of how computation can be an invaluable tool for music composition. Moreover, closely related domains, such as musical analysis, ethnomusicology and psychoacoustics, make extensive use of computational tools.
It is not surprising that, since the advent of computers, there has been great interest in how to take advantage of their superior precision, speed and power in music-related activities. Probably the best-known (and most commercially successful) direction has proven to be the generation and transformation of sound.
Max's programming paradigm actually reflects, and is a metaphor for, a wide range of non-computer-related experiences, such as the behaviour of the postal system: when we bring a parcel to the post office, we are confident that the correct chain of operations (whose details we neither know nor care about) will be performed by a correspondingly structured chain of humans and machines, eventually leading to the delivery of the parcel in
some remote part of the world. Or, the mechanics of a piano, where the pressure on a
key immediately moves a chain of mechanical devices that eventually, in a measurable
but usually negligible time, set the corresponding strings in motion so that we can hear
the resulting sound. Or, again, a traditional sound recording, mixing and amplification
chain, in which the sound entering the chain is transformed into electrical signals,
added to other sounds, amplified, filtered and eventually re-transformed into sound.
It should be noted that the two latter examples are particularly relevant to Max, as
the ‘musical instrument’ and ‘mixing chain’ metaphors have informed its very con-
ception, and still inform its current development.
Although the graphical interfaces of PatchWork (Laurson & Duthen, 1989), OpenMusic (Assayag, Rueda, Laurson, Agon, & Delerue, 1999) and PWGL (Laurson & Kuuskankare, 2002) look quite similar to Max's, their programming paradigm is deeply different: entering data does not trigger any immediate reaction, and an explicit evaluation command must be given to the machine in order to perform the desired operation. The difference is more significant than it might appear, from both a conceptual and a practical point of view. In itself, the evaluation
command is just a key pressure, or a mouse click, but whereas this single action
might not be very critical in a non-real-time context, it becomes crucial when synchro-
nicity matters, or when the data flow comes, for instance, from a MIDI keyboard, or a
microphone, or a video camera. The non-real-time approach is thus unable to keep track
of a stream of incoming events properly, be they a sequence of keys pressed by a player or
a series of instructions guiding the composer’s thought.
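The contrast between the two paradigms can be sketched in ordinary code. This is a deliberately simplified illustration, not actual OpenMusic or bach code, and all the names are invented: a 'deferred' box only reacts to an explicit evaluation command, while a 'reactive' box recomputes and propagates its result as soon as a datum enters.

```python
class DeferredBox:
    """OpenMusic-style: changing an input does nothing until evaluate()."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs, self.value = fn, list(inputs), None

    def set_input(self, i, v):
        self.inputs[i] = v          # no reaction: the result stays stale

    def evaluate(self):             # the explicit evaluation command
        self.value = self.fn(*self.inputs)
        return self.value

class ReactiveBox:
    """Max/bach-style: any new input immediately re-triggers the chain."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, list(inputs)
        self.listeners = []         # downstream boxes to notify
        self.value = fn(*inputs)

    def set_input(self, i, v):
        self.inputs[i] = v
        self.value = self.fn(*self.inputs)   # recompute at once...
        for notify in self.listeners:
            notify(self.value)               # ...and push downstream

transpose = lambda notes, semitones: [n + semitones for n in notes]

om = DeferredBox(transpose, [60, 64, 67], 12)
om.set_input(1, 7)
print(om.value)                     # None: no evaluation has happened yet
print(om.evaluate())                # [67, 71, 74]

bx = ReactiveBox(transpose, [60, 64, 67], 12)
bx.set_input(1, 7)                  # the result updates as the datum enters
print(bx.value)                     # [67, 71, 74]
```

In the deferred model the single extra action (the evaluation command) is harmless in isolation, but it is exactly what cannot be interposed between the events of a live stream.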
As a final note on this subject, it should be remarked that the differences in the user-side
behaviours of these paradigms actually reflect totally different underlying structures, and
the rift between them is far deeper than it might appear. In this sense, bach is essentially and
structurally different from all the major existing CAC environments (see Figure 1 for a comparison example). The consequences of this real-time approach to CAC, from a musician's point of view, will be further explored in the following section of this article.

Figure 1 A comparison between an OpenMusic patch (left) and a bach patch (right) performing the same process (the creation of a bisbigliando around a base note). In the OpenMusic patch, the resulting score is left untouched until it is re-evaluated; in the bach paradigm, the result is updated as soon as one changes a parameter.
Figure 2 A patch used by Andrea Agostini to manage the electronic score for a film. Each note contains a set of instructions for a synthesizer, expressed as text commands and graphical marks. When the score is played, all the information attached to each note is sent to the appropriate synthesizer voice, represented on screen by the note colour. Below the score, some frames of the movie are shown, providing a visual reference for the position of the musical events. In a separate window, not shown here, the frame corresponding to the exact position of the play cursor (the thin vertical bar) is shown in real time, allowing fine control over the synchronization of sound and image.
Figure 3 A patch used by Daniele Ghisi to achieve real-time symbolic granulation. The original score (upper window) has some markers that determine and modify the grain regions. Parameters are handled in the lower left window. When the user clicks the 'Start transcribing' button, the result appears and accumulates in the middle window. One may then make it monophonic (if needed), refine it, and finally quantize it. Every parameter is user-modifiable and affects the 'rough' result in real time, as in any electroacoustic granulation machine.
On the other hand, it is worth underlining that the real-time paradigm is a resource,
rather than an obligation: score changes are handled by the user, who is completely free
to make them happen immediately or only after some ‘refresh’ operation (as would be
the case in non-real-time environments). This means that, in principle, nothing prevents the user from using bach as any other CAC environment; indeed, there are cases (such as fine rhythmic quantization) in which one is obliged to settle into the non-real-time paradigm, since the significant amount of time needed to perform a particular task would disrupt the immediacy of response to the user's actions.
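This point can be illustrated with a small, hypothetical sketch (none of these names come from bach itself): a score holder that normally reacts immediately, but can defer an expensive recomputation, such as rhythmic quantization, behind an explicit refresh.

```python
class ScoreView:
    """Holds the result of applying a process to a source score.
    auto_update=True gives the bach-style immediate reaction; setting it
    to False restores the classic CAC 'evaluate on demand' behaviour."""

    def __init__(self, process, source):
        self.process = process
        self.source = source
        self.auto_update = True        # real-time by default
        self._dirty = False
        self.result = process(source)

    def change(self, new_source):
        self.source = new_source
        if self.auto_update:
            self.result = self.process(self.source)   # immediate reaction
        else:
            self._dirty = True         # postpone the expensive recompute

    def refresh(self):                 # the explicit 'evaluate' command
        if self._dirty:
            self.result = self.process(self.source)
            self._dirty = False
        return self.result

# A stand-in for a slow process (e.g. quantization): here, just sorting.
view = ScoreView(lambda notes: sorted(notes), [64, 60, 67])
view.auto_update = False               # opt out of real time for a slow task
view.change([71, 62, 69])
print(view.result)                     # [60, 64, 67]: stale until we ask
print(view.refresh())                  # [62, 69, 71]
```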
Some Examples
We give some screenshot examples of how the performative and speculative aspects can be unified, convinced that this might show, better than any words, the possibilities of the new model.

Figure 4 A patch performing real-time symbolic transposition, frequency shift and filtering. As the user changes one of the envelopes, the underlying score is updated with the new filtered values (for instance, notice that as the low-cut frequency increases, the notes get more and more rarefied). The patch is obtained by a close interaction of bach objects (such as the underlying score) and standard Max objects (such as the breakpoint functions used for the envelopes).

The first example (Figure 2) shows that the boundary between
sequencers and scores is no longer rigid: a score can be a customizable sequencer,
whose content is completely open to any real-time process the user might want to
realize. Notes carry extra information, specifying the parameters for the processes that will concern them. At the same time, thanks to the possibility of retrieving in real time all the information related to the graphical display of the score, it is straightforward to keep a video sequence constantly aligned with the musical score. In this way, when working with a video, one can always be aware of which video frame each musical event is synchronized to.
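The arithmetic underlying such an alignment is simple. The following sketch (with an assumed frame rate and invented names, not taken from the actual patch) maps each note onset to the movie frame displayed beneath it.

```python
FPS = 24                                # assumed frame rate of the movie

def frame_for_onset(onset_ms, fps=FPS):
    """Index of the video frame displayed at a given score position."""
    return int(onset_ms * fps / 1000)

onsets_ms = [0, 500, 1250, 2000]        # hypothetical note onsets (ms)
frames = [frame_for_onset(t) for t in onsets_ms]
print(frames)                           # [0, 12, 30, 48]
```

The same mapping, driven continuously by the position of the play cursor, is what keeps the separate frame window synchronized during playback.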
In the second example (Figure 3), we set up mechanisms to apply granulation (a
typical electroacoustic treatment) to symbolic data. An original score is used as a
reading buffer, where granulation regions are defined; the result is immediately
visible in a second, constantly growing score, and is affected in real time by any parameter change. In the last example (Figure 4), we set up a system providing counterparts to typical audio-domain transformations, such as transposition, frequency shifting and filtering, applied in real time to symbolic data.
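As a rough illustration of these symbolic treatments (greatly simplified with respect to the actual patches, and using invented helper names), grains can be modelled as random slices of a source note list, and a 'low-cut' as the removal of all notes below a cutoff pitch, producing exactly the rarefaction mentioned in the Figure 4 caption. Pitches are MIDI note numbers.

```python
import random

def granulate(source_notes, grain_size, n_grains, rng=random):
    """Accumulate grains read from random positions in the source score."""
    out = []
    for _ in range(n_grains):
        start = rng.randrange(len(source_notes) - grain_size + 1)
        out.extend(source_notes[start:start + grain_size])
    return out

def transpose(notes, semitones):
    """Symbolic transposition: shift every pitch by the same interval."""
    return [n + semitones for n in notes]

def low_cut(notes, cutoff):
    """Symbolic counterpart of a high-pass filter: drop notes below cutoff."""
    return [n for n in notes if n >= cutoff]

source = [60, 62, 64, 65, 67, 69, 71, 72]       # a C major scale
grains = granulate(source, grain_size=3, n_grains=4)
print(len(grains))                               # 12: four grains of three notes
print(low_cut(transpose(source, 12), cutoff=79)) # [79, 81, 83, 84]
```

Raising the cutoff leaves fewer and fewer notes, just as raising the low-cut frequency rarefies the score in Figure 4.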
Acknowledgements
We are deeply grateful to the following people for their precious support and advice: Carlos Agon,
Arshia Cont, Eric Daubresse, Emmanuel Jourdan, Serge Lemouton, Jean Lochard and Mikhail
Malt. We also wish to thank DaFact for actively sponsoring the development of bach.
References
Assayag, G., Rueda, C., Laurson, M., Agon, C., & Delerue, O. (1999). Computer assisted composition
at Ircam: From PatchWork to OpenMusic. Computer Music Journal, 23(3), 59–72.
Cont, A. (2008). Modeling musical anticipation: From the time of music to the music of time (Joint PhD thesis in Acoustics, Signal Processing, and Computer Science Applied to Music (ATIAM)). University of Paris 6 (UPMC) and University of California San Diego (UCSD), Paris.
Laurson, M., & Duthen, J. (1989). PatchWork, a graphical language in PreForm. In Proceedings of the International Computer Music Conference (pp. 172–175). Ann Arbor: International Computer Music Association.
Laurson, M., & Kuuskankare, M. (2002). PWGL: A novel visual language based on Common Lisp, CLOS and OpenGL. In Proceedings of the International Computer Music Conference (pp. 142–145). Gothenburg, Sweden.
Puckette, M. (2004). A divide between 'compositional' and 'performative' aspects of Pd. In Proceedings of the First International Pd Convention. Graz, Austria.
Seleborg, C. (2004). Interaction temps-réel/temps différé [Real-time/deferred-time interaction] (Master's thesis, ATIAM). Marseille.