First reflections after Studio A session

This post sums up some of the thoughts circling in my head right after the session in May 2017, followed by some further reflections that occurred during the mixing process with Andrew Munsie in June. Some of the tracks are not yet mixed; these have been sent to Gary Bromham, so we can get some reflections from him too during his mixing process.

For this session I had made a set of mappings for each duo configuration, trying to guess what would be interesting. I had made mappings that would be relatively rich and organic, some being more obvious and large-gesture, others being more nuanced and small-scale modulations. The intent was to create instruments that could be used for a longer stretch of time without getting “worn out”. The mappings contained 4 to 8 features from each instrument, mapped to modulate 3 to 7 effects parameters on the other instrument. Since this goes on in both directions (one musician modulating the other and vice versa), it adds up to a pretty complex interaction scenario. There is nothing magic about these numbers (of features and modulators), it just happened to be the amount of modulation mappings I could conceptualize as reasonable combinations. The number of modulations (effect parameter destinations) is slightly lower than the number of features because I would often combine several features (add, gate, etc.) for each modulator. Still, I would also re-use some features for several modulators, so the number of features is just slightly higher than the number of modulators.
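To make this a bit more concrete, here is a minimal sketch in Python of the kind of feature combination described: scaling, adding and gating analysis features into a single modulator value. It is an illustration only, not the actual project code (which runs in the real-time analyzer/modulator matrix); the feature names, ranges and example values are hypothetical.

```python
# Minimal illustration (not the project's actual code) of combining analysis
# features into one modulator value. Feature names and ranges are hypothetical.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Map a feature value from its input range to a parameter range, clamped."""
    norm = (value - in_lo) / (in_hi - in_lo) if in_hi != in_lo else 0.0
    norm = min(max(norm, 0.0), 1.0)
    return out_lo + norm * (out_hi - out_lo)

def gate(value, control, threshold):
    """Let the value through only when a second feature exceeds a threshold."""
    return value if control >= threshold else 0.0

def reverb_time_modulator(features):
    """Add two features (flatness + flux) to shorten the reverb time:
    noisier playing gives a drier, shorter reverb."""
    flatness = scale(features["spectral_flatness"], 0.0, 1.0, 0.0, 1.0)
    flux = scale(features["spectral_flux"], 0.0, 1.0, 0.0, 1.0)
    return 2.5 - (flatness + flux)          # reverb time in seconds, 0.5 .. 2.5

def delay_time_modulator(features):
    """A gated modulator: transient density sets the delay time,
    but only when the rms amplitude is above a threshold."""
    density = scale(features["transient_density"], 0.0, 10.0, 0.1, 1.0)
    return gate(density, features["rms"], threshold=0.2)

# Example: one frame of (made-up) analysis values.
frame = {"spectral_flatness": 0.6, "spectral_flux": 0.3,
         "transient_density": 4.0, "rms": 0.35}
print(reverb_time_modulator(frame), delay_time_modulator(frame))
```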

 

Right after the session:

Reflection on things that could have been done differently: I think it might perhaps have been better to use simpler parameter mappings, to get something very obvious and clear, very distinctly gestural. That would perhaps have been easier for the musicians to relate to intuitively. Subtle and complex mappings are nice, but may just create a mushy din. Since they will be partly “hidden” to the musicians (due to the subtlety of the mapping, and also the signal balance during performance), they will not be finely controlled. Thus, to some extent, they will be randomly related to the performative gestures. Complexity adds noise too (one could say), both for the performer and for listeners of the music. The selection of effects is just as important as the parameter mappings; next time, try to make something that is more gesturally responsive. One specific element that was problematic was the delay time change without pitch modification. Perhaps this is not so great: not easy for the performer to control, and not easily perceived (by the other performer, or by an external listener) either. Related to the liveconvolver takes, I realize that the convolver effect is not so much gestural as a block-wise imposition of one sound on another. (Obvious enough when one thinks about it, but still worth mentioning.)

Reflections during mixing:

We hear rich interactions, and the subtle nuances work well (contrary to my reflections right after the session). One does not really have to decode or intellectualize the mapping, just go with the flow and listen. Sometimes I listen for a specific modulation and totally lose the context and musical meaning. Still, in the mixing process, this is natural and necessary.
The comments from Kyle and Steven that they “would play the same anyway” come in a different light now, as it is hard to imagine that one would not change the performance in response to the processing. The instruments and the processing constitute a whole, perhaps more easily perceived as a whole now in hindsight, but this may very well relate to listening habit. Getting to know this musical situation better will make the whole easier to perceive during performance. Still, the musicians would to some extent not expressively utilize the potential of the complex mappings. This is partly because they did not know the details exactly, …but then perhaps I got exactly what I set it up to do: not telling them the mappings and making the mappings “rich”.

Session in UCSD Studio A (preliminary post)

The post is preliminary in that we still lack some of the mixes from this session; however, I wanted to start writing about it before I forget…

This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all these performers in different constellations, some new combinations were tested this day. The approach was to explore fairly complex feature-modulator mappings. No particular effort was made to intellectualize the details of these mappings; rather, they were experienced as a whole, “as instrument”. I had found that simple mappings, although easy to decode and understand for both performer and listener, would quickly “wear out” and become flat, boring or plainly limiting for musical development during the piece. I attempted to create some “rich” mappings, combining different levels of subtlety: some clearly audible and some subtle timbral effects. The mappings were designed with some specific musical gestures and interactions in mind, and these are listed together with the mapping details for each constellation later in this post.

During this session, we also explored the live convolver in terms of how the audio content in the IR affects the resulting creative options and performative environment for the musician playing through the effect. The liveconvolver takes are presented interspersed with the crossadaptive “feature-modulator” (one could say “proper crossadaptive”) takes. Recording of the impulse response for the convolution was triggered via an external pedal controller during performance, and we let each musician in turn have the role of IR recorder.
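For readers unfamiliar with the live convolver, the sketch below outlines the basic idea in Python: one performer’s signal is captured as the impulse response while the pedal is held, and the other performer’s signal is convolved with it. This is only a conceptual outline under simplifying assumptions (block-wise offline convolution, tails discarded), not the actual real-time implementation used in the project.

```python
# Conceptual sketch of live convolution (not the project's real-time
# implementation): record an IR from one input on demand, convolve the other
# input with it.

import numpy as np
from scipy.signal import fftconvolve

class LiveConvolverSketch:
    def __init__(self, samplerate=44100, max_ir_seconds=2.0):
        self.max_ir = int(samplerate * max_ir_seconds)
        self.ir = np.zeros(1)      # current impulse response
        self.recording = False
        self._capture = []

    def pedal(self, down):
        """Pedal press starts IR capture, release freezes the new IR."""
        if down:
            self._capture = []
        elif self._capture:
            ir = np.concatenate(self._capture)[: self.max_ir]
            peak = np.max(np.abs(ir))
            self.ir = ir / peak if peak > 0 else ir
        self.recording = down

    def process(self, ir_block, play_block):
        """ir_block: audio from the performer currently recording the IR.
        play_block: audio from the performer playing through the convolver.
        Tails beyond the block length are simply discarded in this sketch;
        a real-time version would use partitioned convolution with overlap-add."""
        if self.recording:
            self._capture.append(np.copy(ir_block))
        return fftconvolve(play_block, self.ir, mode="full")[: len(play_block)]
```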

Participants:
Jordan Sand Morton: double bass and voice
Miller Puckette: guitar
Steven Leffue: sax
Kyle Motl: double bass
Oeyvind Brandtsegg: crossadaptive mapping design, processing
Andrew Munsie: recording engineer

The music played was mostly free improvisation, but two of the takes with Jordan Sand Morton were performances of her compositions. These were composed in dialogue with the system, during and in between earlier sessions. She both plays the bass and sings, and wanted to explore how the phrasing and shaping of precomposed material could be used to expressively control the timbral modulations of the effects processing.

Jordan Sand Morton: bass and voice.

These pieces were composed by Jordan with the intention that they be performed freely, shaped according to the situation at performance time, allowing the crossadaptive modulations ample room to influence the sound.

Jordan Sand Morton I confess

“I confess” (Jordan Sand Morton). Bass and voice.

 

Jordan Sand Morton Backbeat thing

“Backbeat thing” (Jordan Sand Morton). Bass and voice.

 

The effects used:
Effects on vocals: Delay, Resonant distorted lowpass
Effects on bass: Reverb, Granular tremolo

The features and the modulator mappings:
(also stating an intended purpose for each mapping)

  • Bass spectral flatness, and
  • Bass spectral flux: both features giving shorter reverb time on bass

Purpose: When the bass becomes more noisy, it will get less reverb

  • Vocal envelope dynamics (dynamic range), and
  • Vocal transient density: both features giving lower lowpass filter cutoff frequency on reverb on bass

Purpose: When the vocal becomes more active, the bass reverb will be less pronounced

  • Bass transient density: higher cutoff frequency (resonant distorted lowpass filter) on vocal

Purpose: to animate a distorted lo-fi effect on the vocals, according to the activity level on bass

  • Vocal mfcc-diff (formant strength, “pressed-ness”): Send level for granular tremolo on bass

Purpose: add animation and drama to the bass when the vocal becomes more energetic

  • Bass transient density: lower lowpass filter frequency for the delay on vocal

Purpose: clean up vocal delays when bass becomes more active

  • Vocal transient density: shorter delay time for the delay on vocal
  • Bass spectral flux: longer delay time for the delay on vocal

Purpose: just for animation/variation

  • Vocal dynamic range, and
  • Vocal transient density: both features giving less feedback for the delay on vocal

Purpose: clean up vocal delay for better articulation on text
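The routing above can be summarized as a simple table, with one entry per feature-to-parameter connection and a flag for whether the parameter is pushed up or down. The sketch below is only a schematic representation of the list above; the parameter names are illustrative, not the actual identifiers used in the session setup.

```python
# Schematic representation of the vocal/bass mapping listed above.
# Parameter names are illustrative only.

ROUTING = [
    # (source feature,           destination parameter,         direction)
    ("bass.spectral_flatness",   "bass.reverb.time",            "down"),
    ("bass.spectral_flux",       "bass.reverb.time",            "down"),
    ("vocal.dynamic_range",      "bass.reverb.lowpass_cutoff",  "down"),
    ("vocal.transient_density",  "bass.reverb.lowpass_cutoff",  "down"),
    ("bass.transient_density",   "vocal.lowpass.cutoff",        "up"),
    ("vocal.mfcc_diff",          "bass.granular_tremolo.send",  "up"),
    ("bass.transient_density",   "vocal.delay.lowpass_cutoff",  "down"),
    ("vocal.transient_density",  "vocal.delay.time",            "down"),
    ("bass.spectral_flux",       "vocal.delay.time",            "up"),
    ("vocal.dynamic_range",      "vocal.delay.feedback",        "down"),
    ("vocal.transient_density",  "vocal.delay.feedback",        "down"),
]

def apply_routing(features, params, amount=0.5):
    """Nudge each destination parameter up or down from its current
    (normalized 0..1) value, proportionally to the source feature."""
    for source, dest, direction in ROUTING:
        value = features.get(source, 0.0)
        sign = 1.0 if direction == "up" else -1.0
        params[dest] = min(max(params.get(dest, 0.5) + sign * amount * value, 0.0), 1.0)
    return params
```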

 

Liveconvolver tracks Jordan/Jordan:

The tracks are improvisations. Here, Jordan’s voice was recorded as the impulse response and she played bass through the voice IR. Since she plays both instruments, this provides a unique approach to the live convolution performance situation.

***

[these two tracks are still in the mixing process, and will be uploaded here as soon as they are ready]

***

Jordan Sand Morton and Miller Puckette

Liveconvolver tracks Jordan/Miller:

These tracks were improvised by Jordan Sand Morton (bass) and Miller Puckette (guitar). Each of the musicians was given the role of “impulse response recorder” in turn, while the other played through the convolver effect.

Improvised liveconvolver performance, Jordan Sand Morton (bass) and Miller Puckette (guitar). Miller records the IR.

Improvised liveconvolver performance, Jordan Sand Morton (bass) and Miller Puckette (guitar). Jordan records the IR.

 

Discussion on the performance with live convolution, with Jordan Sand Morton and  Miller Puckette.

Miller Puckette and Steven Leffue

These tracks were improvised by Miller Puckette (guitar) and Steven Leffue (sax). The feature-modulator mapping was designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping before the first take. The intention of this strategy was to create a naturally flowing environment of exploration, with not-too-obvious relationships between instrumental gestures and resulting modulations. After the first take, some selected elements of the mapping (one for each musician) were explained in more detail to the performers, with the anticipation that these features might then be explored more consciously.

Take 1:

Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 1. Details of the feature-modulator mapping are given below.

Discussion 1 on the crossadaptive performance, with Miller Puckette and Steven Leffue. On the relationship between what you play and how that modulates the effects, on balance of monitoring, and other issues.

The effects used:
Effects on guitar: Spectral delay
Effects on sax: Resonant distorted lowpass, Spectral shift, Reverb

The features and the modulator mappings:
(also stating an intended purpose for each mapping)

  • Guitar envelope crest: longer reverb time on sax

Purpose: dynamic guitar playing will make a big room for the sax

  • Guitar transient density: higher cutoff frequency for reverb highpass filter and lower cutoff frequency for reverb lowpass filter

Purpose: when guitar is more active, the reverb on sax will be less full (less highs and less lows)

  • Guitar transient density (again): downward spectral shift on sax

Purpose: animation and variation

  • Guitar spectral flux: higher cutoff frequency (resonant distorted lowpass filter) on sax

Purpose: just for animation and variation. Note that spectral flux (especially on the guitar) will also give high values on single notes in the low register (the lowest octave), in addition to the expected behaviour of giving higher values on more noisy sounds (see the short flux sketch after this list).

  • Sax envelope crest: less delay send on guitar

Purpose: more dynamic sax playing will “dry up” the guitar delays; one must play long notes to open the guitar’s send to the delay

  • Sax transient density: longer delay time on guitar. This modulation mapping was also gated by the rms amplitude of the sax (so that it is only active when sax gets loud)

Purpose: loud and fast sax will give more distinct repetitions (further apart) on the guitar delay

  • Sax pitch: increase spectral delay shaping of the guitar (spectral delay with different delay times for each spectral band)

Purpose: more unnatural (crazier) effect on guitar when sax goes high

  • Sax spectral flux: more feedback on guitar delay

Purpose: noisy sax playing will give more repetitions (more feedback) on the guitar delay
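Regarding the spectral flux note above: flux is typically computed as the half-wave rectified frame-to-frame difference of the magnitude spectrum, so any bin movement, including the shifting partials of a low sustained note, can push the value up. The sketch below shows this generic textbook definition; it is not necessarily identical to the analyzer used in the project.

```python
# Generic spectral flux (half-wave rectified frame-to-frame spectral change);
# not necessarily the exact analyzer used in the project.

import numpy as np

def spectral_flux(magnitudes):
    """magnitudes: array of FFT magnitude spectra, shape (n_frames, n_bins)."""
    flux = np.zeros(len(magnitudes))
    for i in range(1, len(magnitudes)):
        diff = magnitudes[i] - magnitudes[i - 1]
        flux[i] = np.sum(np.maximum(diff, 0.0))   # only increases count
    return flux

# Example with random magnitudes standing in for analysis frames:
frames = np.abs(np.random.randn(8, 513))
print(spectral_flux(frames))
```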

Take 2:

Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 2. The feature-modulator mapping was the same as for take 1.

Discussion 2 on the crossadaptive performance, with Miller Puckette and Steven Leffue. On how instructions and intellectualizing the mapping made it harder.

Liveconvolver tracks:

Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.

Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Miller records the IR.

Discussion 1 on playing with the live convolver, with Miller Puckette and Steven Leffue.

Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Steven records the IR.

Discussion 2 on playing with the live convolver, with Miller Puckette and Steven Leffue.

 

Steven Leffue and Kyle Motl

Two different feature-modulator mappings were used, and we present one take of each mapping. Like the mappings used for Miller/Steven, these were designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping. The mapping used for the first take closely resembles the mapping for Steven/Miller, with just a few changes to accommodate the different musical context and how the analysis methods respond to the instruments.

  • Bass transient density: shorter reverb time on sax
  • The reverb equalization (highpass and lowpass) was skipped
  • Bass envelope crest: increase send level for granular processing on sax
  • Bass rms amplitude: Parametric morph between granular tremolo and granular time stretch on sax

***

[this track (24) is still in the mixing process, and will be uploaded here as soon as it is ready]

***

On the first crossadaptive take in this duo, Kyle commented that the amount of delay made it hard to play, that any fast phrases would just turn into mush. It seemed the choice of effects and the modulations were not optimal, so we tried another configuration of effects (and thus another mapping of features to modulators).

***

[this track (24) is still in the mixing process, and will be uploaded here as soon as it is ready]

***

This mapping had earlier been used for duo playing between Kyle (bass) and Øyvind (vocal) on several occasions, and it was merely adjusted to accommodate the different timbral dynamics of the saxophone. In this way, Kyle was familiar with the possibilities of the mapping, but not with the context in which it would be used.
The granular processing on both instruments was done with the Hadron Particle Synthesizer, which allows multidimensional parameter navigation through a relatively simple modulation interface (X, Y and 4 expression controllers). The specifics of the actual modulation routing and mapping within Hadron could be described, but it was thought that, in the context of the current report, further technical detail would only take away from the clarity of the presentation. Even though the details of the parameter mapping were designed deliberately, at this point in the performative approach to playing with it, we no longer paid attention to technical specifics. Rather, the focus was on letting go and trying to experience the timbral changes rather than intellectualizing them.
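As a rough illustration of the kind of multidimensional navigation mentioned above, the sketch below interpolates bilinearly between four hypothetical parameter states placed at the corners of an X/Y square. This only illustrates the general principle of 2D state morphing; it is not Hadron’s actual implementation, and the corner states and parameter names are made up.

```python
# Illustration of 2D state morphing (the general principle only, not Hadron's
# actual implementation). Corner states and parameter names are made up.

def morph_2d(states, x, y):
    """Bilinear interpolation between four parameter states at the corners
    of the unit square. x and y are positions in the range 0..1."""
    out = {}
    for name in states["sw"]:
        bottom = states["sw"][name] * (1 - x) + states["se"][name] * x
        top    = states["nw"][name] * (1 - x) + states["ne"][name] * x
        out[name] = bottom * (1 - y) + top * y
    return out

corners = {
    "sw": {"grain_rate": 10.0, "grain_size": 0.10, "pitch": 1.0},
    "se": {"grain_rate": 40.0, "grain_size": 0.05, "pitch": 1.0},
    "nw": {"grain_rate": 10.0, "grain_size": 0.40, "pitch": 0.5},
    "ne": {"grain_rate": 80.0, "grain_size": 0.02, "pitch": 2.0},
}
# The feature-modulators listed below would drive x and y (the state morph),
# while the expression controllers would add further parameter offsets.
print(morph_2d(corners, x=0.3, y=0.7))
```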

The effects used:
Effects on sax: Delay, granular processing
Effects on bass: Reverb, granular processing

The features and the modulator mappings:
(also stating an intended purpose for each mapping)

  • Sax envelope crest: shorter reverb time on bass
  • Sax rms amp: higher cutoff frequency for reverb highpass filter

Purpose: louder sax will make the bass reverb thinner

  • Sax transient density: lower cutoff frequency for reverb lowpass filter
  • Sax envelope dynamics (dynamic range): higher cutoff frequency for reverb lowpass filter

Purpose: faster sax playing will make the reverb less prominent, but more dynamic playing will enhance it

  • Sax spectral flux: Granular processing state morph (Hadron X-axis) on bass
  • Sax envelope dynamics: Granular processing state morph (Hadron Y-axis) on bass
  • Sax rms amplitude: Granular processing state morph (Hadron Y-axis) on bass

Purpose: animation and variation

  • Bass spectral flatness: higher cutoff frequency of the delay feedback path on sax
    Purpose: more noisy bass playing will enhance delayed repetitions
  • Bass envelope dynamics: less delay feedback on sax
    Purpose: more dynamic playing will give less repetitions in delay on sax
  • Bass pitch: upward spectral shift on sax

Purpose: animation and variation, pulling in same direction (up pitch equals shift up)

  • Bass transient density: Granular process expression 1 (Hadron) on sax
  • Bass rms amplitude: Granular process expression 2 & 3 (Hadron) on sax
  • Bass rhythmic irregularity: Granular process expression 4 (Hadron) on sax
  • Bass MFCC diff: Granular processing state morph (Hadron X-axis) on sax
  • Bass envelope crest: Granular processing state morph (Hadron Y-axis) on sax

Purpose: multidimensional and rich animation and variation

On the second crossadaptive take between Steven and Kyle, I asked: “Does this hinder interaction, or does it make something interesting happen?”
Kyle says it hinders the way they would normally play together. “We can’t go to our normal thing because there’s a third party, the mediation in between us. It is another thing to consider.” Also, the balance between the acoustic sound and the processing is difficult. This is even more difficult when playing with headphones, as the dynamic range and response are different. Sometimes the processing will seem very quiet in relation to the acoustic sound of the instruments, and at other times it will be too loud.
Steven says at one point he started not paying attention to the processing and focused mostly on what Kyle was doing. “Just letting the processing be the reaction to that, not treating it as an equal third party. … Totally paying attention to what the other musician is doing and just keeping up with him, not listening to myself.” This also mirrors the usual options of improvisational listening strategy and focus, of listening to the whole or focusing on specific elements in the resulting sound image.

Longer reflective conversation between Steven Leffue, Kyle Motl and Øyvind Brandtsegg. Done after the crossadaptive feature-modulator takes, touching on some of the problems encountered, but also reflecting on the wider context of different kinds of music accompaniment systems.

Liveconvolver tracks:

Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.

***

[these two tracks are still in the mixing process, and will be uploaded here as soon as they are ready]

***

Discussion 1 on playing with the live convolver, with Steven Leffue and Kyle Motl.

Discussion 2 on playing with the live convolver, with Steven Leffue and Kyle Motl.

Session with Jordan Morton and Miller Puckette, April 2017

This session was conducted as part of the preparations for the larger session in UCSD Studio A, and we worked on calibrating the analysis methods to Jordan’s double bass and vocals. Some of the calibration and accommodation of signals also included the fun creative work of figuring out which effects and which effect parameters to map the analyses to. The session resulted in some new discoveries in this respect, for example using the spectral flux of the bass to control the vocal reverb size, and using transient density to control very low range delay time modulations. Among the issues we discussed were aspects of timbral polyphony, i.e. how many simultaneous modulations we can perceive and follow.
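As a side note for readers curious about the features themselves: transient density can be estimated simply by counting onsets within a sliding time window. The sketch below shows one naive way of doing this; the thresholds are arbitrary placeholders and the project’s actual analyzer is more refined.

```python
# Naive transient density estimate: count envelope onsets per second within a
# recent time window. Thresholds are placeholders, not calibrated values.

import numpy as np

def transient_density(envelope, hop_rate, window_seconds=2.0, threshold=0.1):
    """envelope: amplitude envelope, one value per analysis hop.
    hop_rate: number of envelope values per second.
    Returns estimated onsets per second in the most recent window."""
    rises = np.maximum(np.diff(envelope), 0.0)       # positive envelope jumps
    onsets = rises > threshold
    window = int(hop_rate * window_seconds)
    return float(np.sum(onsets[-window:])) / window_seconds

# Example: a synthetic envelope with four sharp attacks over two seconds
env = np.zeros(200)
env[[20, 60, 110, 150]] = 1.0
print(transient_density(env, hop_rate=100))          # about 2 onsets per second
```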

 

Playing or being played – the devil is in the delays

Since the crossadaptive project involves designing relationships between performative actions and sonic responses, it is also about instrument design in a wide sense of the term. Some of these relationships can be seen as direct extensions of traditional instrument features, like the relationship between energy input and the resulting sonic output. We can call this the mapping between input and output of the instrument. Other relationships are more complex, and involve how the actions of one performer affect the processing of another. That relationship can be viewed as one performer changing the mapping between input and output of another instrument. Maybe another instrument is not the correct term to use, since we can view all of this as one combined super-instrument. The situation quickly becomes complex. Let’s take a step back and contemplate some of the separate aspects of a musical instrument and what constitutes its “moving parts”.

Playing on the sound

One thing that has always inspired me about electric and electronic instruments is how the sound of the instrument can be tweaked and transformed. I know I have many fellow performers with me in saying that changing the sound of the instrument completely changes what you can and will play. This is of course true with acoustic instruments too, but it is even more clear when you can keep the physical interface identical but change the sonic outcome drastically. The comparison becomes clearer, since the performative actions, the physical interaction with the instrument, do not need to change significantly. Still, when the sound of the instrument changes, even the physical gestures to produce it change, and so does what you will play. There is a connection and an identification between performer and sound, the current sound of the instrument. This also extends to the amplification system and to the room where the sound comes out. Performers who have a lot of experience playing on big PA systems know the difference between “just” playing your instrument and playing the instrument through the sound system and in the room.

Automation in instruments

In this context, I have also mused on how much an instrument ‘does for you’ automatically, for example “smart” computer instruments that will give back (in some sense) more than you put in. Also “creative” packages like Garageband, and much of what comes with programs like Ableton Live, where we can shuffle around templates of stuff made by others, like Photoshop beautifying filters for music. This description is not intended to paint a bleak picture of the future of creativity, but it is indeed something to be aware of. In the context of our current discussion it is relevant because of the relation between input and output of the instrument; Garageband and Live, as instruments, will transform your input significantly according to their affordances. The concept is not necessarily limited to computer instruments either, as all instruments add ‘something’ that the performer could not have done by himself (without the external instrument). As an example many are familiar with: when playing through a delay effect, creating beautiful rhythmic textures out of a simple input, there may be a fine line between the moment you are playing the instrument and the moment when, all of a sudden, the instrument is playing you, and all you can do is try to keep up. The devil, as they say, is in the delays!

Flow and groove

There is also a common notion among musicians of the music flowing so easily that it is as if the instrument is playing itself. Being in the groove, in flow, transcendent, totally in the moment, or other descriptions may apply. One might argue that this phenomenon is also a result of training, muscle memory, gut reaction, instinct. These are in some ways automatic processes. Any fast human reaction relies in some respect on a learned response; processing a truly unexpected event takes several hundred milliseconds. Even if it is not automated to the same degree as a delay effect, we can say that there is no clean division between automated and contemplated responses. We could probably delve deep into psychology to investigate this matter in detail, but for our current purposes it is sufficient to say that automation is there to some degree at this level of human performance, as well as in the instrument itself.

Another aspect of automation (if we can include in automation external events that trigger actions that would not have happened otherwise), or of “falling into the beat”, is the synchronizing action when playing in rhythm with another performer. This has some similarity to the situation of “being played” by the delay effect. The delay processor has even more of a “chasing” effect, since it will always continue, responding to every new event, non-stop. Playing with another performer does not have that self-continuing perpetual motion, but in some cases, the resulting groove might have.

Adaptive, in what way?

So when performing in a crossadaptive situation, what attitude could or should we take towards the instrument and the processes therein? Should the musicians just focus on the acoustic sound, and play together more or less as usual, letting the processing unfold in its own right? From a traditionally trained performer’s perspective, one could expect the processing to adapt to the music that is happening, adjusting itself to create something that “works”. However, this is not the only way it could work, and perhaps not the mode that will produce the most interesting results. Another approach is to listen closely to what comes out of the processing, perhaps to the degree that we disregard the direct sound of the (acoustic) instrument, and just focus on how the processing responds to the different performative gestures. In this mode, the performer would continually adjust to the combined system of acoustic instrument, processing, interaction with the other musician, and signal interaction between the two instruments, also including any contribution from the amplification system and the ambience (e.g. playing on headphones or on a PA). This is hard for many performers, because the complete instrument system is bigger, has more complex interactions, and sometimes has a delay from when an action occurs to when the system responds (which might be a musical and desirable delay, or a technical artifact); plainly, there is a larger distance to all the “moving parts” of the machinery that enables the transformation of a musical intent into a sounding result. In short, we could describe it as having a lower control intimacy. There is also, of course, a question of the willingness of the performer to put himself in a position where all these extra factors are allowed to count, as it will naturally render most of us amateurs again, not knowing how the instrument works. For many performers this is not immediately attractive. Then again, it is an opportunity to find something new and to be forced to abandon regular habits.

One aspect that I haven’t seen discussed so much is the instrumental scope of the performer. As described above, the performer may choose to focus on the acoustic and physical device that was traditionally called the instrument, and operate this with proficiency to create coherent musical statements. On the other hand, the performer may take into account the whole system of sound generation and transformation (where does that end? is it even contained in the room in which we perform the music?). Many expressive options and possibilities lie within the larger system, and the position of the listener/audience also oftentimes lies somewhere in the bigger space of this combined system. These reflections of course apply just as much to any performance on a PA system, or in a recording studio, but I’d venture to say they are crystallized even more clearly in the context of crossadaptive performance.

Intellectualize the mapping?

To what degree should the performers know and care about the details of the crossadaptive modulation mappings? Would it make sense to explore the system without knowing the mapping? Just play. It is an attractive approach for many, as any musical performance situation is in any case complex, with many unknown factors, so why not just throw in these ones too? This can of course be done, and some of our experiments in San Diego have been following this line of investigation (me and Kyle played this way with complex mappings, and the Studio A session between Steven and Kyle leaned towards this approach). The rationale for doing so is that with complex crossadaptive mappings, the intellectual load of just remembering all the connections can override any impulsive musical incentive. Now, after doing this on some occasions, I begin to see that as a general method this is perhaps not the best way to do it. The system’s response to a performative action is in many cases so complex and relates to so many variables that it is very hard to figure out “just by playing”. Some sort of training, or an explorative process to familiarize the performer with the new expressive dimensions, is needed in most cases. With complex mappings, this will be a time-consuming process. Just listing and intellectualizing the mappings does not work for making them available as expressive dimensions during performance. This may be blindingly obvious after the fact, but it is indeed a point worth mentioning. Familiarization with the expressive potential takes time, and is necessary in order to exploit it. We’ve seen some very clear pedagogical approaches in some of the Trondheim sessions, and these take on the challenge of getting to know the full instrument in a step by step manner. We’ve also seen some very fruitful explorative approaches to performance in some of the Oslo sessions. Similarly, when Miller Puckette in our San Diego sessions chooses to listen mainly to the processing (not to the direct sound of his instrument, and not to the direct sound of his fellow musician’s instrument, but to the combined result), he actively explores the farthest reaches of the space constituted by the instrumental system as a whole. Miller’s approach can work even if all the separate dimensions have not been charted and familiarized separately, basically because he focuses almost exclusively on those aspects of the combined system output. As often happens in conversations with Miller, he captures the complexity and the essence of the situation in clear statements:

“The key ingredients of phrasing is time and effort.”

What about the analytical listener?

In our current project we don’t include any proper research on how this music is experienced by a listener. Still, we as performers and designers/composers are also experiencing the music as listeners, and we cannot avoid wondering how (or if) these new dimensions of expression affect the perception of the music “from the outside”. The different presentations and workshops of the project afford opportunities to hear how outside listeners perceive it. One recent and interesting such opportunity came when I was asked to present something for Katharina Rosenberger’s Composition Analysis class at UCSD. The group comprised graduate composition students, critical and highly reflective listeners, and in the context of this class their listening was especially aimed towards analysis. What is in there? How does it work musically? What is this composition? Where is the composing? In the discussions with this class, I got to ask them whether they perceived it as important for the listener to know and understand the crossadaptive modulation mappings. Do they need to learn the intricacies of the interaction and the processing in the same pedagogical manner? The output from this class was quite clear on the subject:

It is the things they make the performers do that is important

In one way, we could understand this as a modernist stance: if it is in there, it will be heard, and thus it matters. We could also understand it to mean that the changes in the interaction, the things the performers will do differently in this setting, are the most interesting. When we hear surprise (in the performer), and a subsequent change of direction, we can follow that musically without knowing the exact details that led to the surprise.

 

 

Live convolution session in Oslo, March 2017

Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice) Mats Claesson (documentation and observation).

The focus for this session was to work with the new live convolver in Ableton Live.
