Exploring radically new modes of musical interaction in live performance
Session with Jordan Morton and Miller Puckette, April 2017 (June 9, 2017) – This session was conducted as part of preparations for the larger session in UCSD Studio A, and we worked on calibration of the analysis methods to Jordan's double bass and vocals. Some of the calibration and accommodation of signals also includes …
Playing or being played – the devil is in the delays (June 9, 2017) – Since the crossadaptive project involves designing relationships between performative actions and sonic responses, it is also about instrument design in a wide definition of the term. Some of these relationships can be seen as direct extensions of traditional instrument features, …
Liveconvolver experiences, San Diego (June 7, 2017) – The liveconvolver has been used in several concerts and sessions in San Diego this spring. I played three concerts with the group Phantom Station (The Loft, Jan 30th, Feb 27th and Mar 27th), where the first involved the liveconvolver. Then …
Live convolution session in Oslo, March 2017 (June 7, 2017) – Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice), Mats Claesson (documentation and observation). The focus for this session was to work with the new live convolver in Ableton Live. Setup – getting to know the Convolver: We …
Seminar on instrument design, software, control (March 24, 2017) – Online seminar March 21. Trond Engum and Sigurd Saue (Trondheim), Bernt Isak Wærstad (Oslo), Marije Baalman (Amsterdam), Joshua Reiss (London), Victor Lazzarini (Maynooth), Øyvind Brandtsegg (San Diego). Instrument design, software, control: We now have some tools that allow practical experimentation, …
Live convolution with Kjell Nordeson (March 23, 2017) – Session at UCSD March 14. Kjell Nordeson: drums. Øyvind Brandtsegg: vocals, convolver. Contact mikes: In this session, we explore the use of convolution with contact mikes on the drums to reduce feedback and cross-bleed. There is still some bleed from …
Session with classical percussion students at NTNU, February 20, 2017 (March 10, 2017) – Introduction: This session was a first attempt at trying out cross-adaptive processing with pre-composed material. Two percussionists, Even Hembre and Arne Kristian Sundby, students at the classical section, were invited to perform a composition written for two tambourines. The musicians …
Seminar: Mixing and timbral character (March 2, 2017) – Online conversation with Gary Bromham (London), Bernt Isak Wærstad (Oslo), Øyvind Brandtsegg (San Diego), Trond Engum and Andreas Bergsland (Trondheim). Gyrid N. Kaldestad, Oslo, was also invited but unable to participate. The meeting revolves around the issues "mixing and timbral …
Convolution experiments with Jordan Morton (March 1, 2017) – Jordan Morton is a bassist and singer; she regularly performs using both instruments combined. This provides an opportunity to explore how the liveconvolver can work when both the IR and the live input are generated by the same musician. We did a …
Session UCSD 14. February 2017 (February 15, 2017) – Session objective: The session objective was to explore the live convolver, how it can affect our playing together and how it can be used. New convolver functionality for this session is the ability to trigger IR update via transient detection, …
Seminar 16. December (February 3, 2017) – Philosophical and aesthetical perspectives – report from meeting 16/12, Trondheim/Skype. Andreas Bergsland, Trond Engum, Tone Åse, Simon Emmerson, Øyvind Brandtsegg, Mats Claesson. The performers' experiences of control: In the last session (Trondheim December session) Tone and Carl Haakon (CH) worked with …
Oslo, First Session, October 18, 2016 (December 12, 2016) – First Oslo Session. Documentation of process 18.11.2016. Participants: Gyrid Kaldestad, vocal; Bernt Isak Wærstad, guitar; Bjørnar Habbestad, flute. Observer and video: Mats Claesson. The session took place in one of the sound studios at the Norwegian Academy of Music, Oslo …
Seminar 21. October (October 31, 2016) – We were a combination of physically present and online contributors to the seminar. Joshua Reiss and Victor Lazzarini participated via online connection; present together in Trondheim were Maja S.K. Ratkje, Siv Øyunn Kjenstad, Andreas Bergsland, Trond Engum, Sigurd Saue and …
Session 19. October 2016 (October 31, 2016) – Location: Kjelleren, Olavskvartalet, NTNU, Trondheim. Participants: Maja S. K. Ratkje, Siv Øyunn Kjenstad, Øyvind Brandtsegg, Trond Engum, Andreas Bergsland, Solveig Bøe, Sigurd Saue, Thomas Henriksen. Session objective and focus: Although the trio BRAG RUG has experimented with crossadaptive techniques in rehearsals and …
Brief system overview and evaluation (October 7, 2016) – As preparation for upcoming discussions about technical needs in the project, it seems appropriate to briefly describe the current status of the software developed so far. The plugins: The two main plugins developed are the Analyzer and the MIDIator. The …
Skype on philosophical implications (September 14, 2016) – Skype with Solveig Bøe, Simon Emmerson and Øyvind Brandtsegg. Our intention for the conversations was to start sketching out some of the philosophical implications of our project. Partly as a means to understand what we are doing and what it …
Documentation as debugging (September 2, 2016) – This may come as no surprise to some readers, but I thought it appropriate to note anyway. During the blog writing about rhythmic analysis – part 1, I noticed I would tighten up the definition of terms, and also the …
Mixing with Gary (June 16, 2016) – During our week in London we had some sessions with Gary Bromham, first at the Academy of Contemporary Music in Guildford on June 7th, then at QMUL later in the week. We wanted to experiment with cross-adaptive techniques …
Seminar and meetings at Queen Mary University of London (June 16, 2016) – June 9th and 10th we visited QMUL and met Joshua Reiss and his eminent colleagues there. We were very well taken care of and had a pleasant and interesting stay. June 9th we had a seminar presenting the project, and discussing …
Seminar at De Montfort (June 14, 2016) – Wednesday June 8th we visited Simon Emmerson at De Montfort and also met Director Leigh Landy. We were very well taken care of and had a pleasant and interesting stay. One of the main objectives was to do a seminar …
Project start meeting in Trondheim (June 14, 2016) – Kickoff: Monday June 6th we had a project start meeting with the NTNU-based contributors: Andreas Bergsland, myself, Solveig Bøe, Trond Engum, Sigurd Saue, Carl Haakon Waadeland and Tone Åse. This gave us the opportunity to present the current state …
Conversation with Marije (May 30, 2016) – We had a Skype meeting between me (Oeyvind) and Marije Baalman today; here are some notes from the conversation: First, we really need to find alternatives to Skype; the flakiness of the connection makes it practically unusable. A service that allows recording the conversation would …
Workflows, processing methods (May 23, 2016) – As we've now done some initial experiment sessions and gotten to know the terrain a little better, it seems a good time to sum up the different working methods for cross-adaptive processing in a live performance setting. Some of this …
Cross adaptive mixing in a standard DAW (May 15, 2016) – To enable the use of these techniques in a common mixing situation, we've made some example configurations in Reaper. The idea is to extract some feature of the modulator signal (using common tools like EQs and compressors rather than more …
Signal (modulator) routing considerations (May 15, 2016) – Analysis happens at the source, but we may want to mix modulation signals from several analyzed sources, so where do we do this? In a separate modulation matrix, or at the destination? Mixing at the destination allows us to only …
Interaction types (May 15, 2016) – We see several types of interactions already: ornamenting, which expands or embroiders the other sound, creating features/events (in time) that were not there before; transplanting, which transfers its own timbral character to the other sound; inhibiting, when one sound plays, the other …
Session with Jordan Morton and Miller Puckette, April 2017

This session was conducted as part of preparations for the larger session in UCSD Studio A, and we worked on calibration of the analysis methods to Jordan's double bass and vocals. Some of the calibration and accommodation of signals also includes the fun creative work of figuring out which effects and which effect parameters to map the analyses to. The session resulted in some new discoveries in this respect, for example using the spectral flux of the bass to control vocal reverb size, and using transient density to control very low range delay time modulations. Among the issues we discussed were aspects of timbral polyphony, i.e. how many simultaneous modulations can we perceive and follow?
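The kind of mapping mentioned above, a spectral feature of one instrument controlling an effect parameter on another, can be illustrated with a minimal offline sketch. This is not the project's Analyzer/MIDIator code; the frame size, smoothing factor and parameter range below are arbitrary assumptions chosen for illustration.

```python
import numpy as np

def spectral_flux(frames):
    """Per-frame spectral flux: the positive change in spectral
    magnitude between consecutive STFT frames, normalized to 0..1."""
    mags = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    diff = np.diff(mags, axis=0)
    flux = np.sum(np.maximum(diff, 0.0), axis=1)
    peak = flux.max()
    return flux / peak if peak > 0 else flux

def map_to_range(value, out_lo, out_hi, smooth, state):
    """One-pole smoothing of the control signal, then scaling
    to the target parameter range (here: a reverb 'size')."""
    state['y'] = state['y'] + smooth * (value - state['y'])
    return out_lo + state['y'] * (out_hi - out_lo)

# Stand-in for a bass signal, chopped into 1024-sample frames
sig = np.random.randn(1024 * 16)
frames = sig.reshape(-1, 1024)
flux = spectral_flux(frames)

# Map flux to a hypothetical reverb-size parameter in 0.2..0.95
state = {'y': 0.0}
reverb_sizes = [map_to_range(f, 0.2, 0.95, 0.3, state) for f in flux]
```

The smoothing stage matters in practice: without it, frame-to-frame jitter in the analysis would make the controlled parameter flutter audibly.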
Playing or being played – the devil is in the delays

Since the crossadaptive project involves designing relationships between performative actions and sonic responses, it is also about instrument design in a wide definition of the term. Some of these relationships can be seen as direct extensions of traditional instrument features, like the relationship between energy input and the resulting sonic output. We can call this the mapping between input and output of the instrument. Other relationships are more complex, and involve how the actions of one performer affect the processing of another. That relationship can be viewed as an action of one performer changing the mapping between input and output of another instrument. Maybe "another instrument" is not the correct term to use, since we can view all of this as one combined super-instrument. The situation quickly becomes complex. Let's take a step back and contemplate for a bit some of the separate aspects of a musical instrument and what constitutes its "moving parts".
Playing on the sound
One thing that has always inspired me with electric and electronic instruments is how the sound of the instrument can be tweaked and transformed. I know I have many fellow performers with me in saying that changing the sound of the instrument completely changes what you can and will play. This is of course true with acoustic instruments too, but it is even clearer when you can keep the physical interface identical but change the sonic outcome drastically. The comparison becomes clearer, since the performative actions, the physical interaction with the instrument, do not need to change significantly. Still, when the sound of the instrument changes, even the physical gestures to produce it change, and so does what you will play. There is a connection and an identification between performer and sound, the current sound of the instrument. This also extends to the amplification system and to the room where the sound comes out. Performers who have a lot of experience playing on big PA systems know the difference between "just" playing your instrument and playing the instrument through the sound system and in the room.
Automation in instruments
In this context, I have also mused on the subject of how much an instrument 'does for you', I mean automatically, for example "smart" computer instruments that will give back (in some sense) more than you put in. Also in terms of "creative" packages like Garageband, and much of what comes with programs like Ableton Live, where we can shuffle around templates of stuff made by others, like Photoshop beautifying filters for music. This description is not intended to paint a bleak picture of the future of creativity, but it is indeed something to be aware of. In the context of our current discussion it is relevant because of the relation between the input and output of the instrument; Garageband and Live, as instruments, will transform your input significantly according to their affordances. The concept is not necessarily limited to computer instruments either, as all instruments add 'something' that the performer could not have done by himself (without the external instrument). Also, an example many are familiar with: playing through a delay effect, creating beautiful rhythmic textures out of a simple input. There may be a fine line between the moment when you are playing the instrument and the moment when, all of a sudden, the instrument is playing you, and all you can do is try to keep up. The devil, as they say, is in the delays!
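The "instrument playing you" quality of a delay effect comes from the feedback path: each echo is written back into the delay buffer, so a single input event keeps generating material on its own. A minimal sketch of such a feedback delay line (the delay time, feedback and mix values are arbitrary):

```python
import numpy as np

def feedback_delay(x, delay_samples, feedback=0.5, mix=0.5):
    """Simple feedback delay line: the delayed signal is fed back
    into the buffer, so one input event yields a decaying train of
    repeats that the performer then has to relate to."""
    y = np.zeros(len(x))
    buf = np.zeros(delay_samples)  # circular delay buffer
    idx = 0
    for n in range(len(x)):
        delayed = buf[idx]                      # read the echo
        buf[idx] = x[n] + feedback * delayed    # write input + feedback
        y[n] = (1 - mix) * x[n] + mix * delayed # dry/wet mix
        idx = (idx + 1) % delay_samples
    return y

# A single impulse produces echoes at multiples of the delay time,
# each one quieter by the feedback factor
impulse = np.zeros(100)
impulse[0] = 1.0
out = feedback_delay(impulse, delay_samples=25, feedback=0.5)
```

Each repeat is half the level of the previous one here, so the pattern dies away unless the performer feeds it new material, which is exactly the "keeping up" dynamic described above.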
Flow and groove
There is also a common concept among musicians of when the music flows so easily that it is as if the instrument is playing itself. Being in the groove, in flow, transcendent, totally in the moment, or other descriptions may apply. One might argue that this phenomenon is also a result of training, muscle memory, gut reaction, instinct. These are in some ways automatic processes. Any fast human reaction relies in some respect on a learned response; processing a truly unexpected event takes several hundred milliseconds. Even if it is not automated to the same degree as a delay effect, we can say that there is no clean division between automated and contemplated responses. We could probably delve deep into psychology to investigate this matter in detail, but for our current purposes it is sufficient to say that automation is there to some degree at this level of human performance, as well as in the instrument itself.
Another aspect of automation (if we in automation can include external events that trigger actions that would not have happened otherwise), or of "falling into the beat", is the synchronizing action of playing in rhythm with another performer. This has some similarity to the situation of "being played" by the delay effect. The delay processor has even more of a "chasing" effect, since it will always continue, responding to every new event, non-stop. Playing with another performer does not have that self-continuing perpetual motion, but in some cases the resulting groove might have.
Adaptive, in what way?
So when performing in a crossadaptive situation, what attitude could or should we adopt towards the instrument and the processes therein? Should the musicians just focus on the acoustic sound and play together more or less as usual, letting the processing unfold in its own right? From a traditionally trained performer's perspective, one could expect the processing to adapt to the music that is happening, adjusting itself to create something that "works". However, this is not the only way it could work, and perhaps not the mode that will produce the most interesting results. Another approach is to listen closely to what comes out of the processing, perhaps to the degree that we disregard the direct sound of the (acoustic) instrument and just focus on how the processing responds to the different performative gestures. In this mode, the performer would continually adjust to the combined system of acoustic instrument, processing, interaction with the other musician, signal interaction between the two instruments, also including any contribution from the amplification system and the ambience (e.g. playing on headphones or on a PA). This is hard for many performers, because the complete instrument system is bigger, has more complex interactions, and sometimes has a delay from when an action occurs to when the system responds (which might be a musical and desirable delay, or a technical artifact). In short, there is a larger distance to all the "moving parts" of the machinery that enables the transformation of a musical intent into a sounding result; we could describe this as lower control intimacy. There is also, of course, a question of the willingness of the performer to set himself in a position where all these extra factors are allowed to count, as it will naturally leave most of us in the position of amateurs again, not knowing how the instrument works. For many performers this is not immediately attractive.
Then again, it is an opportunity to find something new and to be forced to abandon regular habits.
One aspect that I haven't seen discussed so much is the instrumental scope of the performer. As described above, the performer may choose to focus on the acoustic and physical device that was traditionally called the instrument, and operate this with proficiency to create coherent musical statements. On the other hand, the performer may take into account the whole system of sound generation and transformation (where does that end? is it even contained in the room in which we perform the music?). Many expressive options and possibilities lie within the larger system, and the position of the listener/audience also oftentimes lies somewhere in the bigger space of this combined system. These reflections of course apply just as much to any performance on a PA system, or in a recording studio, but I'd venture to say they are crystallized even more clearly in the context of the crossadaptive performance.
Intellectualize the mapping?
To what degree should the performers know and care about the details of the crossadaptive modulation mappings? Would it make sense to explore the system without knowing the mapping? Just play. It is an attractive approach for many, as any musical performance situation is in any case complex, with many unknown factors, so why not just throw in these ones too? This can of course be done, and some of our experiments in San Diego have followed this line of investigation (Kyle and I played this way with complex mappings, and the Studio A session between Steven and Kyle leaned towards this approach). The rationale for doing so is that with complex crossadaptive mappings, the intellectual load of just remembering all connections can override any impulsive musical incentive. Now, after doing this on some occasions, I begin to see that as a general method this is perhaps not the best way to do it. The system's response to a performative action is in many cases so complex, and relates to so many variables, that it is very hard to figure out "just by playing". Some sort of training, or explorative process to familiarize the performer with the new expressive dimensions, is needed in most cases. With complex mappings, this will be a time-consuming process. Just listing and intellectualizing the mappings does not work for making them available as expressive dimensions during performance. This may be blindingly obvious after the fact, but it is indeed a point worth mentioning. Familiarization with the expressive potential takes time, and is necessary in order to exploit it. We've seen some very clear pedagogical approaches in some of the Trondheim sessions, which take on the challenge of getting to know the full instrument in a step-by-step manner. We've also seen some very fruitful explorative approaches to performance in some of the Oslo sessions.
Similarly, when Miller Puckette in our sessions in San Diego chooses to listen mainly to the processing (not to the direct sound of his instrument, and not to the direct sound of his fellow musician's instrument, but to the combined result), he actively explores the farthest reaches of the space constituted by the instrumental system as a whole. Miller's approach can work even if all the separate dimensions have not been charted and familiarized separately, basically because he focuses almost exclusively on those aspects of the combined system output. As often happens in conversations with Miller, he captures the complexity and the essence of the situation in clear statements:
“The key ingredients of phrasing is time and effort.”
What about the analytical listener?
In our current project we don't include any proper research on how this music is experienced by a listener. Still, we as performers and designers/composers also experience the music as listeners, and we cannot avoid wondering how (or if) these new dimensions of expression affect the perception of the music "from the outside". The different presentations and workshops of the project afford opportunities to hear how outside listeners perceive it. One recent and interesting such opportunity came when I was asked to present something for Katharina Rosenberger's Composition Analysis class at UCSD. The group comprised graduate composition students: critical and highly reflective listeners who, in the context of this class, aimed their listening especially towards analysis. What is in there? How does it work musically? What is this composition? Where is the composing? In the discussions with this class, I got to ask them if they perceived it as important for the listener to know and understand the crossadaptive modulation mappings. Do they need to learn the intricacies of the interaction and the processing in the same pedagogical manner? The output from this class was quite clear on the subject:
It is the things they make the performers do that is important
In one way, we could understand this as a modernist stance: if it is in there, it will be heard, and thus it matters. We could also understand it to mean that the changes in the interaction, the things performers will do differently in this setting, are what is most interesting. When we hear surprise (in the performer), and a subsequent change of direction, we can follow that musically without knowing the exact details that led to the surprise.
Liveconvolver experiences, San Diego

The liveconvolver has been used in several concerts and sessions in San Diego this spring. I played three concerts with the group Phantom Station (The Loft, Jan 30th, Feb 27th and Mar 27th), where the first involved the liveconvolver. Then one concert with the band Creatures (The Loft, April 11th), where the live convolver was used with contact mikes on the drums, and IRs live sampled from the vocals. I also played a duo concert with Kjell Nordeson at Bread and Salt on April 13th, where the liveconvolver was used with contact mikes on the drums, with IRs live sampled from my own vocals. Then a duo concert with Kyle Motl at Coaxial in L.A. (April 21st), where a combination of crossadaptive modulation and live convolution was used. For the duo with Kyle, I switched between using bass as the IR and vocals as the IR, letting the other instrument play through the convolver. A number of studio sessions were also conducted, with Kjell Nordeson, Kyle Motl, Jordan Morton, Miller Puckette, Mark Dresser, and Steven Leffue. A separate report on the studio session in UCSD Studio A will be published later.
“Phantom Station”, The Loft, SD
This group is based on Butch Morris' conduction language for improvisation, and the performance typically requires a specific action (specific although it is free and open) to happen on cue from the conductor. I was invited into this ensemble and encouraged to use whatever part of my instrumentarium I might see fit. Since I had just finished the liveconvolver plugin, I wanted to try that out. I also figured my live processing techniques would fit easily, in case the liveconvolver did not work so well. Both the live processing and the live convolution instruments were in practice less than optimal for this kind of ensemble playing. Even though the instrumental response can be fast (low latency), the way I normally use these instruments is not for making a musical statement quickly for one second and then suddenly stopping again. This leads me to reflect on a quality measure I haven't really thought of before. For lack of a better word, let's call it reactive inertia: the ability to completely change direction on the basis of some external and unexpected signal. This is something other than the audio latency (of the audio processing), and also something other than the user interface latency (for example, the time it takes the performer to figure out which button to turn to achieve a desired effect). I think it has to do with the sound production process: for example, how some effects take time to build up before they are heard as a distinct musical statement; the inertia due to interaction between humans; and the signal chain of sound production before the effects (say, if you live sample or live process someone, you first need to get a sample, or some exciter signal). For live interaction instruments, the reactive inertia is then governed by the time it takes two performers to react to the external stimuli, and by their combined efforts being turned into sound by the technology involved. Much like what an old man once told me at Ocean Beach:
“There’s two things that needs to be ready for you to catch a wave
– You, …and the wave”.
We can of course prepare for sudden shifts in the music, and construct instruments that will be able to produce sudden shifts and fast reactions. Still, the reaction to a completely unexpected or unfamiliar stimulus will be slower than optimal. An acoustic instrument has fewer of these limitations. For this reason, I switched to using the Marimba Lumina for the remaining two concerts with Phantom Station, to be able to shape immediate short musical statements with more ease.
“Creatures”, The Loft, SD
Creatures is the duo of Jordan Morton (double bass, vocals) and Kai Basanta (drums). I had the pleasure of sitting in with them, making it a trio for this concert at The Loft. Creatures have some composed material in the form of songs, and combine this with long improvised stretches. For this concert I got to explore the liveconvolver quite a bit, in addition to the regular live processing and Marimba Lumina. The convolver was used with input from piezo pickups on the drums, convolving with IRs live recorded from vocals. Piezo pickups can be very "impulse-like", especially when used on percussive instruments. The pickups' response has a generous amount of high frequencies, and a very high dynamic range. Due to the peaky, impulse-like nature of the signal, it drives the convolution almost like a sample playback trigger, creating delay patterns on the input sound. Still, the convolver output can become sustained and dense when there is high activity on the triggering input. In the live mix, the result sounds somewhat similar to infinite reverb or "freeze" effects (using a trigger to capture a timbral snippet and holding that sound as long as the trigger is enabled). Here, the capture would be the IR recording, and the trigger to create and sustain the output is the activity on the piezo pickup. The causality and performer interface are very different from those of a freeze effect, but listening to it from the outside, the result is similar. These expressive limitations can be circumvented by changing the miking technique, and by working in a more directed way with regard to what sounds go into the convolver. Due to the relatively few control parameters, the main thing deciding how the convolver sounds is the input signals. The term causality in this context was used by Miller Puckette when talking about the relationship between performative actions and instrumental reactions.
Creatures at The Loft. A liveconvolver example can be found from 29:00 to 34:00 with vocal IR, and briefly around 35:30 with IR from double bass.
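The basic mechanism described above (recording an IR from one live signal, then convolving another signal through it) can be sketched offline in a few lines. This is a simplified stand-in for the actual liveconvolver plugin, which runs in real time; the buffer lengths, fade time and test signals below are arbitrary.

```python
import numpy as np

def record_ir(signal, length, fade=64):
    """Capture an impulse response from a live signal: take a segment
    and apply a short fade-out to avoid a click at the truncation point."""
    ir = signal[:length].astype(float).copy()
    ir[-fade:] *= np.linspace(1.0, 0.0, fade)
    return ir

def live_convolve(input_sig, ir):
    """Convolve the live input with the captured IR. A peaky,
    impulse-like input (e.g. a piezo transient) essentially triggers
    playback of the IR, giving delay-pattern-like results."""
    out = np.convolve(input_sig, ir)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out  # normalize to avoid clipping

# Vocal-like segment used as IR, impulse-like 'drum' input as trigger
vocal = np.sin(np.linspace(0, 40 * np.pi, 2048))
ir = record_ir(vocal, 1024)
drums = np.zeros(4096)
drums[[0, 1000, 2000]] = 1.0  # three piezo-like transients
out = live_convolve(drums, ir)
```

With this input, each transient starts a copy of the IR, which is why a peaky pickup signal makes the convolver behave like a sample trigger, while dense input activity overlaps many copies into a sustained texture.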
“Nordeson/Brandtsegg duo”, Bread & Salt, SD
Duo configuration, where Kjell plays drums/percussion and vibraphone, and I did live convolution, live processing and Marimba Lumina. My techniques were much like those I used with Creatures. The live convolver setup was also similar, with the IR being live sampled from my vocals and the convolver being triggered by piezo pickups on Kjell's drums. I had the opportunity to work over a longer period of time preparing for this concert together with Kjell. Because of this, we managed to develop a somewhat more nuanced utilization of the convolver techniques. Still, in the live performance situation on a PA, the technical circumstances made it a bit more difficult to utilize the fine-grained control over the process, and I felt the sounding result was similar in function to what I did with Creatures. It works well like this, but there is potential for getting a lot more variation out of this technique.
We used a quadraphonic PA setup for this concert. Due to an error with the front-of-house patching, only 2 of the 4 lines from my electronics were recorded, so the mix is somewhat off balance. The recording also lacks the first part of the concert, starting some 25 minutes into it.
"Gringo and the Desert", Coaxial, LA

In this duo Kyle Motl plays double bass and I do vocals, live convolution, live processing, and also crossadaptive processing. I did not use the Marimba Lumina in this setting, so some more focus was allowed for the processing. In terms of crossadaptive processing, the utilization of the techniques is a bit more developed in this configuration. We've had the opportunity to work over several months, with dedicated rehearsal sessions focusing on separate aspects of the techniques we wanted to explore. As it happened during the concert, we played one long set, and the different techniques were enabled as needed. Parameters that were manually controlled in parts of the set were then delegated to crossadaptive modulations in other parts of the set. The live convolver was used freely as one out of several active live processing modules/voices. The liveconvolver with vocal IR can be heard for example from 16:25 to 20:10. Here, the IR is recorded from vocals, and the process acts as a vocal "shade" or "pad", creating long sustained sheets of vocal sound triggered by the double bass. Then there is liveconvolver with bass IR from 20:10 to 23:15, where we switch to full crossadaptive modulation until the end of the set. We used a complex mapping designed to respond to a variety of expressive changes. Our attitude/approach as performers was not to intellectually focus on controlling specific dimensions, but to allow the adaptive processing to naturally follow whatever happened in the music.
Gringo and the Desert at Coaxial DTLA. …yes, the background noise is the crickets outside.
Session with Steven Leffue (Apr 28th, May 5th)
I did two rehearsal sessions together with Steven Leffue in April, as preparation for the UCSD Studio A session in May. We worked both on crossadaptive modulations and on live convolution. Especially interesting with Steven is his own use of adaptive and crossadaptive techniques. He has developed a setup in PD, where he tracks transient density and amplitude envelope over different time windows, and also uses the standard deviation of transient density within these windows. The windowing and statistics he uses can act somewhat like a feature we have also discussed in our crossadaptive project: a method of making an analysis "in relation to the normal level" for a given feature, and thus a way to track relative change. Steven's Master's thesis "Musical Considerations in Interactive Design for Performance" relates to this and other issues of adaptive live performance. Notable is also his ICMC paper "AIIS: An Intelligent Improvisational System". His music can be heard at http://www.stevenleffue.com/music.html, where the adaptive electronics are featured in "A theory of harmony" and "Futures".
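The idea of analyzing "in relation to the normal level" can be sketched as a running z-score over a window of recent feature values. This is not Steven's PD patch, just a minimal illustration of the principle; the window length and the test values are arbitrary.

```python
import numpy as np
from collections import deque

class RelativeFeature:
    """Track a feature (e.g. transient density) relative to its own
    recent 'normal level': output is the deviation of the current
    value from the running mean, scaled by the running std dev."""
    def __init__(self, window=50):
        self.history = deque(maxlen=window)

    def update(self, value):
        self.history.append(value)
        mean = np.mean(self.history)
        std = np.std(self.history)
        if std < 1e-9:
            return 0.0  # no variation yet: nothing deviates from normal
        return (value - mean) / std  # z-score: relative change

# A steady stream reads as 'normal'; a sudden burst stands out
tracker = RelativeFeature(window=20)
steady = [tracker.update(4.0 + 0.1 * (i % 2)) for i in range(20)]
burst = tracker.update(9.0)  # sudden jump in transient density
```

The attraction of this kind of normalization is that the same modulation mapping can work for a quiet, sparse player and a loud, dense one: only departures from each player's own recent behaviour produce large control values.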
Our first session was mainly devoted to testing and calibrating the analysis methods for use on the saxophone. In very broad terms, we notice that the different analysis streams now seem to work relatively similarly on different instruments. The main differences are related to extraction of tone/noise balance, general brightness, timbral "pressedness" (weight of formants), and to some extent transient detection and pitch tracking. The reason why the analysis methods now appear more robust is partly due to refinements in their implementation, and partly due to (more) experience in using them as modulators. Listening, experimentation, tweaking, and plainly just a lot of spending-time-with-them have made for a more intuitive understanding of how each analysis dimension relates to an audio signal.
The second session was spent exploring live convolution between sax and vocals. Of particular interest here are the comments from Steven regarding the performative roles of recording the IR vs playing through the convolver. Steven states quite strongly that the one recording the IR has the most influence over the resulting music. This impression is consistent both when he records the IR (and I sing through it) and when I record the IR and he plays through it. This may be caused by several things, but of special interest is that it is diametrically opposed to what many other performers have stated. Kyle, Jordan and Kjell, in our initial sessions, all voiced a higher performative intimacy, a closer connection to the output, when playing through the IR. Maybe Steven is more concerned with the resulting timbre (including the processed sound) than with the physical control mechanism, as he routinely designs and performs with his own interactive electronics rig. Of course all musicians care about the sound, but perhaps there is a difference of approach in just how to get there. With the liveconvolver we put the performers in an unfamiliar situation, and the differences in approach might just show different methods of problem solving to gain control over this situation. What I'm trying to investigate is how the liveconvolver technique works performatively, and in this, the performer's personal and musical traits play into the situation quite strongly. Again, we can only observe single occurrences and try to extract things that might work well. There are no conclusions to be drawn on a general basis as to what works and what does not, and neither can we conclude what the nature of this situation and this tool is. One way of looking at it (I'm still just guessing) is that Steven treats the convolver as *the environment* in which music can be made.
The changes to the environment determine what can be played and how it will sound; thus, the one recording the IR controls the environment, and subsequently the most important factor in determining the music.
In this session, we also experimented a bit with transposed and reversed IRs, these being some of the parametric modifications we can make to the IR with our liveconvolver technique. Transposing can be interesting, but also quite difficult to use musically. Transposing in octave intervals can work nicely, as it acts just as much as a timbral colouring without changing the pitch class. A fun fact about the reversed IR as used by Steven: if he played in the style of Charlie Parker and we reversed the IR, it would sound like Evan Parker. Then, if he played like Evan Parker and we reversed the IR, it would still sound like Evan Parker. One could say this puts Evan Parker at the top of the convolution-evolutionary saxophone tree…
Liveconvolver experiment Sax/Vocals, time reversed IR recorded by Sax.
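For the curious, IR reversal and transposition can be illustrated offline in a few lines of numpy. This is only a sketch under my own assumptions: the actual liveconvolver works in real time, and transposing by naive resampling, as here, also changes the IR’s duration, which is not necessarily how the plugin does it:

```python
import numpy as np

def apply_ir(signal, ir, reverse=False, semitones=0):
    """Offline convolution with an optionally reversed/transposed IR.
    Transposition is naive resampling (also shortens/lengthens the IR)."""
    if reverse:
        ir = ir[::-1]
    if semitones:
        ratio = 2.0 ** (semitones / 12.0)   # 12 semitones = one octave up
        n = max(1, int(round(len(ir) / ratio)))
        ir = np.interp(np.linspace(0, len(ir) - 1, n),
                       np.arange(len(ir)), ir)
    return np.convolve(signal, ir)

rng = np.random.default_rng(0)
ir = rng.standard_normal(1000)      # stand-in for a recorded IR
dry = rng.standard_normal(4410)     # stand-in for the live input
wet = apply_ir(dry, ir, reverse=True, semitones=12)
print(len(wet))  # len(dry) + len(resampled IR) - 1 = 4410 + 500 - 1 = 4909
```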
Session with Miller Puckette, May 8th
The session was intended as a “calibration run”, to see how the analysis methods responded to Miller’s guitar, as preparation for the upcoming bigger session in UCSD Studio A. The main objective was to determine which analysis features would work best as expressive dimensions, find the appropriate ranges, and start looking at potentially useful mappings. After this, we went on to explore the liveconvolver with vocals and guitar as the input signals. Because of the “calibration run” mode of approach, the session was not videotaped, and our comments and discussion were only dimly picked up by the microphones used for processing. Here’s a transcription of some of Miller’s initial comments on playing with the convolver:
“It is interesting, that …you can control aspects of it but never really control the thing. The person who’s doing the recording is a little bit less on the hook. Because there’s always more of a delay between when you make something and when you hear it coming out [when recording the IR]. The person who is triggering the result is really much more exposed, because that person is in control of the timing. Even though the other person is of course in control of the sonic material and the interior rhythms that happen.”
Since the liveconvolver has been developed and investigated as part of the research on crossadaptive techniques, I had slipped into the habit of calling it a crossadaptive technique. In discussion, Miller pointed out that the liveconvolver is not really *crossadaptive* as such. BUT it involves some of the same performative challenges, namely playing something that is not played solely for the sake of its own musical value. The performers will sometimes need to play something that will affect the sound of the other musician in some way. One of the challenges is how to incorporate that thing into the musical narrative, taking care both of how it sounds in itself and of exactly how it will affect the other performer’s sound. Playing with the liveconvolver has this performative challenge, as has regular crossadaptive modulation. One thing the liveconvolver does not have is the reciprocal, two-way modulation: it is more of a one-way process. The recent Oslo session on live convolution used two liveconvolvers simultaneously to re-introduce the two-way reciprocal dependency.
The focus for this session was to play with, fine-tune and work further on the mappings we set up during the last session at NMH in November. For practical reasons, we had to split the session into two half days, on the 13th and 19th of January.
13th of January 2017
We started by analysing four different musical gestures for the guitar, something that had been skipped due to time constraints during the last session. During this analysis we found the need to specify the spread of the analysis results in addition to the region. This way we could differentiate the analysis results in terms of stability and conclusiveness. We decided to analyse the flute and vocals again to add the new parameters.
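A minimal numerical analogue of region and spread (my own illustration, not the project’s analyser code) could look like this: the region says where an analysis stream typically sits, the spread says how stable and conclusive it is.

```python
import numpy as np

def region_and_spread(values):
    """Summarize an analysis stream: the region it typically occupies
    (10th-90th percentile) and its spread (standard deviation),
    a rough measure of how stable/conclusive the result is."""
    v = np.asarray(values, dtype=float)
    region = (np.percentile(v, 10), np.percentile(v, 90))
    return region, np.std(v)

# A steady pitch track is "conclusive"; a jumpy one is not
stable = [440.0, 441.0, 439.5, 440.2]
jumpy = [440.0, 880.0, 220.0, 660.0]
_, stable_spread = region_and_spread(stable)
_, jumpy_spread = region_and_spread(jumpy)
print(stable_spread < jumpy_spread)  # True
```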
19th of January 2017
After the analysis was done, we started working on a mapping scheme that involved all three instruments, so that we could play in a trio setup. The mappings between flute and vocals were the same as in the November session.
The analyser was still run in Reaper, but all routing, the effects chain and the mapping (MIDIator) were now done in Live. Software instability (the old Reaper projects from November wouldn’t open) and the change of DAW from Reaper to Live meant that we had to set up and tune everything from scratch.
Sound examples with comments and immediate reflections
1. Guitar & Vocal – First duo test, not ideal, forgot to mute analyser.
2. Guitar & Vocal retake – Listened back on speakers after recording. Nice sounding. Promising.
Reflection: There seem to be some elements missing, in a good way, meaning that there is space left for things to happen in the trio format. There is still a need for fine-tuning of the relationship between guitar and vocals. This stems from the mapping being done mainly with the trio format in mind.
3. Vocals & flute – Listened back on speakers after recording.
Reflections: dynamic soundscape, quite diverse results; some of the same situations as with take 2, the sounds feel complementary to something else. Effect tuning: more subtle ring mod (good!) compared to the last session, but the filter on the vocals is a bit too heavy-handed. Should we flip the vocal filter? This could prevent filtering and reverb taking place simultaneously. Concern: is the guitar/vocal relationship weaker compared to vocal/flute? Another idea comes up: should we look at connecting gates or bypasses in order to create dynamic transitions between dry and processed signals?
4. Flute & Guitar
Reflections: both the flute ring mod and the guitar delay are a bit on the heavy side, not responsive enough. It is interesting how the effect transformations affect material choices when improvising.
Comments and reflections after the recording session
It is interesting to be in a situation where you, as you play, have a multi-layered focus: playing, listening, thinking of how you affect the processing of your fellow musicians and how your own sound is affected, and trying to make something worth listening to. Of course we are now in an “etude mode”, but still striving for the goal: great output!
There seems to be a bug in the analyser tool when it comes to consistency: sometimes some parameters drop out. We found that it is a good idea to run the analysis a couple of times for each sound to get the most precise result.