for Point of Departure, October 2010
Can you describe your technological setups, as well as the controllers you use (and how they work)? (This doesn't need to be intolerably specific, but clear enough to understand the structure of your processing, from samples (or live sources) to what we hear, and the control of it in real time.)
We should start by saying that our first priority is to communicate with people through sound, so it can be distracting and misleading to talk about the technology as if it were at the centre of what we do. Our setup consists, at the moment, of two MIDI keyboards and a few other controllers, controlling sample playback software on three computers, each of us having access to two of them. So the situation involves characteristics both of one instrument and of two, being played by both one person and two, and that's more central to what we think about than the specific technology used to bring it about.
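[To make the shape of that routing concrete for readers, here is a minimal, purely illustrative Python sketch of one keyboard note triggering one prepared sample, assuming the mido and simpleaudio libraries and invented sample file names. It is not FURT's actual software, which is not described in detail in this interview; it only indicates the basic keyboard-to-sample-playback path.]

```python
# Hypothetical illustration only (not FURT's software): one MIDI keyboard
# triggering prepared samples. Assumes the mido and simpleaudio libraries
# and WAV files whose names are invented for this example.
import mido
import simpleaudio as sa

# Invented mapping from MIDI note numbers to prepared sample files.
SAMPLES = {
    60: sa.WaveObject.from_wave_file("samples/voice_fragment.wav"),
    61: sa.WaveObject.from_wave_file("samples/metal_scrape.wav"),
}

with mido.open_input() as port:        # default MIDI input, i.e. one keyboard
    for msg in port:                   # blocks, yielding messages as they arrive
        if msg.type == "note_on" and msg.velocity > 0:
            sample = SAMPLES.get(msg.note)
            if sample is not None:
                sample.play()          # fire-and-forget; overlapping notes layer freely
```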
As I understand it, much of your work is sample-based, as opposed to synthesis-generated or live-input manipulation. Sometimes the heard sounds are so manipulated as to have no identifiable source, but sometimes not. How do you both think about this notion of source bonding? Is this a line you play with purposefully in performance? If so, can you give a little insight into some of your choices in that regard?
We’re not opposed to using synthesis or live processing, but again our starting point isn’t the setup but the sounds, which can be found and formed by many different processes. The identifiability or alienness of sounds can be an important part of their attractiveness, or repulsiveness, and we’re interested in working with the fullest possible range between allusion and abstraction.
The notion of performance. On stage you sit behind racks of gear that separate you from the audience, but you can be very physically animated and engaged when performing. This strikes me as somewhere between the anti-performance stance of some laptop artists and the more gestural performance concepts of a place like STEIM and Michel Waisvisz. How would you describe your performance approach on this continuum (even though I am sure this performance style developed over the many years of your collaboration, most likely without trying to 'place it on this continuum')?
We most definitely don't take an anti-performance stance; the racks of gear take up a lot less space than a grand piano, for example, and we're actually facing towards the audience, which a pianist often isn't. Sitting at a laptop appearing to do nothing is presumably some kind of ironic stance, and FURT is not ironic. Whatever gestural aspect there is to our playing is motivated principally by getting physically involved with the sounds (rather than the technology), establishing a positive feedback loop in which certain sonic characteristics (short percussive sounds, to take an obvious example) might evoke certain physical approaches to the keyboard in particular, which in turn enhance those sonic characteristics, and so on, analogously to playing an acoustic instrument. Another main reason for it is the kind of musical-gestural communication through peripheral vision which is important, for example, in classical chamber music or, perhaps more relevantly, in the way a jazz rhythm section works. If at the same time it creates a kind of presence that brings the music closer to an audience in live performance, that's an added benefit.
How do you manage structure in your duet? How do you start and end a piece, and what role does the notion of cadence play (or not play) in FURT? Can there be a non-tonal cadence? Is there a way to signify the continuity of experience when we have been so conditioned to expect finality? Is there a way to start and stop without beginning and ending?
Managing structure is something we try to assess freshly at each new stage in our work, and it's constantly a work in progress. Part of this is indeed the development and expansion of our musical syntax, one aspect of which could be described in terms of cadences. Musical syntax clearly doesn't depend only on cadences, though, since many musical traditions don't have them, and cadences don't depend on tonality: gamelan music, for example, uses cadence-like forms but isn't at all tonal. Anyway, such syntactic elements then serve to create a background, perhaps an illogical one (FURT's "logic"), which we can then work against in counterpoint. Structural turning points or cadences often follow each other very rapidly in FURT rather than just signalling beginnings and endings. Some of our pieces have clearly imagined starting and ending points prior to performance, while others don't. For us, some of the most exciting moments are when the music does stop without ending, coming to a total standstill which "could" be an ending but which then lasts only a fraction of a second. But once you get down to that level of detail, the music might be seen as being perforated by thousands of tiny silences, any of which could be the last.
In your work with larger organisms like the Evan Parker Electroacoustic Ensemble and fORCH, do you consider FURT a combined instrument, two separate instruments, or less like instruments and more like a sound design team (not that the three are so cut and dried)? Also, in those groups, do you process live input?
In Evan’s group we seem to fit in somewhere between the acoustic instruments and the live processors, having some of the characteristics of both, and both here and in fORCH we tend to function as a single “organ” within the larger organism. One reason for not using live input is that we haven’t yet really found a way to do it with the kind of precision and complexity we apply to sound materials we’ve worked on and “learned” in the sense of learning an instrument, one of the many imaginary instruments contained within our single technical setup.
I read that you might describe some of the music you make together as coming from ecstatic states. How do you attempt to negotiate the gear you use, as well as the disembodiment of its sound (coming from speakers not attached to your body), in order to attain this state? This leads me to think about the idea of subjectivity in electroacoustic improvisation. A fusing with one's instrument, so the myth goes, is the only way to be flexible enough to make the instrument 'disappear' and let the voice of the musician (of course the traditional pure subject here) speak. But "Virtuosity is nothing other than a theatre of domestication: reinstalling, after the combat and the conquest, the 'I' in mastery" [Peter Szendy]. I rather like this quote, but I think virtuosity is not quite so simple (and for the record, neither does he). At the very least, though, one could argue that many musicians seem to require a prosthesising of the instrument, a kind of fusing that allows the physical to absorb the mental/spiritual. There is a whole complex here of body, object, sound and spirit (I am quite fascinated by the idea of the musical prosthetic). To achieve a state of performative transcendence, must one be in that state, a prosthetic state? For the electroacoustic performer, in my experience, it seems more difficult to have this prosthetic experience. Michel Waisvisz founded an entire field in response to this, by reinserting his body into the performance.
It's only harder for the electroacoustic performer to have such a prosthetic experience (which, let's emphasise, is primarily imagined) because at this point in history it is less familiar than it is with acoustic instruments. But it was once harder for people to imagine making music with mechanical instruments at all, rather than only with the "more natural" human voice. The fact that our keyboards are connected to the sound-producing apparatus by wires isn't, as far as musical imagination and creativity are concerned, different in principle from the contraption of wood, metal and felt that you have in a piano mechanism. The question of "ecstatic states" has more to do with the original meaning of ecstasy, being "outside oneself", transcending the physical relationship between performer and instrument, and also between performer and performer. Virtuosity is a factor, but again it's only part of what's going on – we negotiate an entire spectrum between virtuosic control and various kinds of randomness, and try to erase the distinction between them.
How do you manage frequency range and density between the two of you in performance?
Aside from any pre-organised structures for those things, which we usually do employ, we leave the details entirely to our collective intuition, which has evolved in tandem with the increasing density and speed of the music over the years. This intuition, though, even in this kind of area, is strongly conditioned by our fascination with serial thinking, particularly that of Stockhausen, as well as by the traditions of improvised music.
Is the idea of creating compelling music from sound as material – that is, musical sound without a specific musical reference (back to source bonding, I suppose), which is not to say noise, either – somehow the task of the 21st-century musician? Does this somehow place it outside of brute commodity (not to engender a full-on political discussion...)? And is this (the attempt to be outside of commodification) a moral imperative that you embrace?
Yes, most certainly, though we're not so naive as to think that there is any purely musical way to ensure that any such attempt will remain successful. Think of Beethoven. His was an extremely radical project at the time, and yet in the centuries since his death his music has been appropriated in support of both totalitarian oppression and market capitalism. The struggle must continue beyond art too, or things are going to get even worse.