Live convolution session in Oslo, March 2017

Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice), and Mats Claesson (documentation and observation).

The focus for this session was to work with the new live convolver in Ableton Live.

Setup – getting to know the Convolver

We worked in duo configurations: flute/guitar, guitar/vocal and vocal/flute.

We started by spending some time exploring and understanding the controls. Our first setup was guitar/flute, and we chose to start convolving in auto mode. We knew, both from experience with convolution in general and from previous live convolver session reports, that sustained and percussive sounds would yield very different results, so we started with complementary material: percussive sounds (flute) with sustained sounds (guitar). While this made it quite clear how the convolver worked, the output was less than impressive. The next step was to switch the inputs while preserving the playing techniques. Still, everything seemed to sound somewhat delayed, with ringing overtones. It was suggested to add in some dry signal to produce more aesthetically pleasing sounds, but at this point we decided to listen only to the wet signal, as the main goal was to explore and understand the ways of the convolver.
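The delayed, ringing character has a simple explanation in the mathematics of convolution: every sample of the input triggers a scaled copy of the entire IR, so a long sustained IR rings through the whole output. A minimal offline numpy sketch (the signals and parameters are illustrative stand-ins, not the plugin's internals):

```python
import numpy as np

sr = 44100  # sample rate in Hz

# Percussive input: a short decaying noise burst (stand-in for a flute attack)
t_in = np.arange(int(0.05 * sr)) / sr
percussive = np.random.randn(len(t_in)) * np.exp(-t_in * 80)

# Sustained "IR": a one-second decaying sine (stand-in for a held guitar note)
t_ir = np.arange(sr) / sr
sustained_ir = np.sin(2 * np.pi * 220 * t_ir) * np.exp(-t_ir * 2)

# Convolution: every input sample triggers a full, scaled copy of the IR,
# so the output rings for the entire IR duration - the smeared, delayed
# quality heard when convolving percussive input with sustained material.
out = np.convolve(percussive, sustained_ir)

print(len(out) == len(percussive) + len(sustained_ir) - 1)  # True
```

The output length is `len(input) + len(IR) - 1`: a 50 ms attack convolved with a one-second IR produces over a second of sound, which is why the result feels delayed rather than immediate.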

Sonics of convolution

We continued the process by switching from auto to triggered mode, with the flute in the role of the IR, to make the convolver a bit more responsive. This produced a few nice moments, but the overall result was still quite “mushy”. We explored reducing the IR size and working with IR pitch to find other sound qualities.
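The effect of these two controls can be sketched offline; the following is a rough numpy model of our own devising (the function name and parameters are illustrative, not the plugin's actual implementation):

```python
import numpy as np

def shape_ir(ir, size_frac=1.0, pitch_ratio=1.0):
    """Rough model of the IR size and IR pitch controls: truncate the IR,
    then naively resample it (pitch_ratio > 1 -> shorter, higher IR)."""
    n = max(1, int(len(ir) * size_frac))
    ir = ir[:n]                          # smaller IR size -> tighter, less smeared result
    idx = np.arange(0, n, pitch_ratio)   # crude resampling via linear interpolation
    return np.interp(idx, np.arange(n), ir)

ir = np.random.randn(44100)              # one second of captured material as IR
short_high = shape_ir(ir, size_frac=0.1, pitch_ratio=2.0)
print(len(short_high))  # 2205: a tenth of the IR, played back at double speed
```

Truncating the IR directly shortens the ringing tail, which is why a smaller IR size counteracts the “mushy” quality.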

We subsequently switched from flute to vocal, working with guitar and voice in trigger mode, with the voice used as IR. We also decided to add in some dry sound from the vocal, since the listeners (Mats and Bjørnar) found it much more interesting when we could hear the relation between the IR and the convolved signal. Since the convolver still felt quite slow in response and muddy in sound, we also tried out a very short IR size in auto mode.

As shown in this video, we tried to find out how the vocal sound could affect the processing of the flute. The experience was that we got either some strange reverberation or some strange EQ – the sound that came out of the processed flute was not as interesting as we had hoped!

Trying again – it doesn’t feel like we get any interesting sounds. Is it the vocal input that doesn’t give the flute sound anything to work with? The flute sound we get out is still a bit harsh in the higher frequencies, and with a bit of strange reverberation.

Critical listening

All these initial tests were more or less didactic, in that we chose fixed materials (such as percussive versus sustained) in order to emphasise the effect of the convolver. After three short sessions this became a limitation that hindered improvisational flow and phrasing. Especially in the flute/vocal session this was an issue. Too often, the sonic result of the convolution was less than intriguing. We discussed whether the live convolver would lend itself more easily to composed situations, as the necessity of carefully selecting material types that would convolve in an interesting way rendered it less usable in improvised settings. We decided to add more control to the setup in order to remedy this problem.

Adding control

Gyrid commented that she felt the system was quite limiting, especially when you are used to controlling the live processing yourself. To remedy this, we started adding parameter controls for the performers. In a session with vocal and guitar, we added an expression pedal for Bernt Isak (guitar) to control the IR size. This was the first time we experienced a responsive system that made musical sense.

Revised setup

After some frustrating and, from our perception, failed attempts at getting interesting musical results, we decided to revise our setup. After some discussion we came to the conclusion that our existing setup, using one live convolver, was cross adaptive in the signal control domain, but didn’t feel cross adaptive in the musical domain. We therefore decided to add another crossing by using two live convolvers, where each instrument had the role of IR in one convolver and of input in the other. We also decided to set one convolver to auto mode and the other to trigger mode for better separation and more variation in the musical output.
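The cross-routing can be summarised in a simplified offline numpy stand-in for the real-time setup (signal and function names are our illustration; the actual plugin updates the IR continuously rather than per buffer):

```python
import numpy as np

def live_convolve(input_sig, ir_source, ir_size):
    """Offline stand-in for one live convolver: take the most recent
    ir_size samples of the IR source and convolve the input with them."""
    ir = ir_source[-ir_size:]
    return np.convolve(input_sig, ir)[:len(input_sig)]

sr = 44100
guitar = np.random.randn(sr)  # placeholder signals, one second each
vocal = np.random.randn(sr)

# The crossing: each instrument acts as IR in one convolver
# and as input in the other.
guitar_through_vocal = live_convolve(guitar, vocal, ir_size=sr // 10)
vocal_through_guitar = live_convolve(vocal, guitar, ir_size=sr // 10)

# Return tracks panned slightly apart (simple constant-gain panning)
left = 0.8 * guitar_through_vocal + 0.4 * vocal_through_guitar
right = 0.4 * guitar_through_vocal + 0.8 * vocal_through_guitar
```

With this routing, each player simultaneously shapes and is shaped by the other, which is what restored the feeling of musical cross adaptiveness.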

Guitar

  • Fader 1: IR size
  • Fader 2: IR auto update-rate
  • Fader 3: Dry sound
  • Expression pedal 1: IR pitch
  • Toggle switch: switch inputs

Vocal

  • Fader 1: IR size
  • Fader 2: IR pitch

Flute

  • Fader 1: IR size
  • Fader 2: IR pitch
  • Fader 3: Dry sound

The convolvers were placed on return tracks, both panned slightly to the sides to make it easier to distinguish between them, while also adding some stereo width.

Sound excerpt 1- flute & vocal, using two convolvers:

Bjørnar has the same setup as Bernt Isak. This was a better experience. Using different microphones – maybe one inside and one outside the flute – could change the way the convolver uses the signal.

It’s quite apparent to us that using sustained sounds doesn’t work very well. It seems to us that the effect just makes the flute sound less interesting: it somehow reduces the bandwidth, amplifies the resonant frequencies or just makes some strange phasing. The soundscape changes and gets more interesting when we shift to more percussive and distorted sound qualities. Could it be an idea to make it possible to extract only the distorted parts of a sound input?

Sound excerpt 2- guitar & vocal, using two convolvers:

Session with guitar and vocal where we control the IR size, IR pitch and IR rate.

Gyrid has the input signal in trigger mode and can control IR size and pitch with faders. Bernt Isak has the input signal in auto mode and can control the amount of dry signal, IR size and rate with faders, and pitch with an expression pedal. It was a very positive experience to use two convolvers! Even though one convolver is cross adaptive in the sense that it uses two signals, it didn’t feel cross adaptive musically, but more like a traditional live processing setup. We also found that having one convolver in trigger mode and one in auto mode was a good way of adding movement and variation to the music, as one convolver keeps a more steady “timing”, while the other one can be completely free. It also seems essential to have the possibility to control the dry signal – hearing the dry signal makes the music more three-dimensional.

Sound excerpt 3- flute & guitar, using two convolvers:

Session with guitar and flute – Bjørnar has the same setup as Gyrid, but with added control over the amount of dry sound. Same issue with the flute microphone as above.

The experience is very different between flute/vocals and guitar/vocals, which mainly has to do with the way the instruments are played. The guitar has a very distinct attack, and it is very clear when the timbral character changes. Flute and vocals have a more similar frequency response, and the result gets less interesting. Adding more effects to the guitar (distortion + tremolo) makes a huge difference – but so does the fact that percussive sounds from the vocal give the most interesting musical output.

Overall session reflections

Choice of instrument combinations is crucial for live convolving to be controllable and produce artistically interesting results. We also noted that there is a difference between signal cross adaptiveness and musical cross adaptiveness. From our experience, double live convolving is needed to produce a feel of musical cross adaptiveness similar to what we’ve experienced in the previous cross adaptive signal processing sessions.

Ideas for further development

Some ideas for adding more control: the possibility to switch between auto mode and trigger mode could be interesting. A visual indicator for the IR trigger mode would also be useful for easier tuning of the trigger settings.

Bjørnar suggested combining the live convolver with the analyzer/midimapper in order to only convolve when a minimum of noise and/or transients is available – e.g. linking spectral crest to a wet/dry parameter in the convolution, or splitting the audio signal based on spectral conditions.
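As a sketch of that idea: spectral crest (the peak-to-mean ratio of the magnitude spectrum) separates tonal frames from noisy ones, and could drive a wet amount. The mapping and thresholds below are hypothetical and would need tuning by ear:

```python
import numpy as np

def spectral_crest(frame):
    """Crest of the magnitude spectrum: peaky (tonal) frames score high,
    flat (noisy) frames score close to 1."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return mag.max() / (mag.mean() + 1e-12)

def crest_to_wet(crest, lo=2.0, hi=20.0):
    """Map crest to a wet amount in [0, 1]: noisier input (low crest)
    gets more convolution, tonal input stays mostly dry.
    lo/hi are hypothetical thresholds."""
    return float(np.clip((hi - crest) / (hi - lo), 0.0, 1.0))

tonal = np.sin(2 * np.pi * 440 * np.arange(1024) / 44100)  # flute-like frame
noisy = np.random.randn(1024)                               # transient/noise frame

print(spectral_crest(tonal) > spectral_crest(noisy))  # True
```

Inverting the mapping (more wet for tonal input) is the other obvious design choice; which direction works would have to be tried in practice.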

It could perhaps also yield some interesting results to add spectral processing that reduces the fundamental frequency component (similar to Øyvind Brandtsegg’s feedback tools) for instruments with a very strong fundamental (like the flute).
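A crude offline sketch of such fundamental reduction, here simply attenuating the strongest spectral peak of an FFT frame (a real implementation would need proper pitch tracking and a time-domain filter; the signal is a synthetic flute-like tone):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
# Flute-like test tone: strong fundamental at 440 Hz plus weaker harmonics
x = (np.sin(2 * np.pi * 440 * t)
     + 0.2 * np.sin(2 * np.pi * 880 * t)
     + 0.1 * np.sin(2 * np.pi * 1320 * t))

# Attenuate the strongest spectral peak (the fundamental) by ~20 dB
# before the signal would be used as IR material.
spectrum = np.fft.rfft(x)
k0 = np.argmax(np.abs(spectrum))           # bin of the strongest partial
spectrum[max(0, k0 - 2):k0 + 3] *= 0.1     # crude "spectral notch"
y = np.fft.irfft(spectrum, n=len(x))
```

After the notch, the harmonics dominate, which should give the convolver more broadband material to work with.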
