Live convolution session in Oslo, March 2017

Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice), Mats Claesson (documentation and observation).

The focus for this session was to work with the new live convolver in Ableton Live.

Setup – getting to know the Convolver

We worked in duo configurations – flute/guitar, guitar/vocal and vocal/flute.

We started by spending some time exploring and understanding the controls. Our first setup was guitar/flute, and we chose to start convolving in auto mode. We knew, both from experience with convolution in general and from previous live convolver session reports, that sustained and percussive sounds would yield very different results, so we started with contrasting combinations: percussive sounds (flute) with sustained sounds (guitar). While this made it quite clear how the convolver worked, the output was less than impressive. The next step was to switch the inputs while preserving the playing techniques. Still, everything seemed to sound somewhat delayed, with ringing overtones. It was suggested to add in some dry signal to produce more aesthetically pleasing sounds, but at this point we decided to listen only to the wet signal, as the main goal was to explore and understand the ways of the convolver.
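
To make the basic operation concrete, here is a minimal offline sketch of live convolution in Python/numpy. This is not the actual Ableton Live device, which captures and applies the IR in real time (typically via partitioned convolution); it only approximates the idea on recorded buffers, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def live_convolve(ir_source, input_sig, ir_start, ir_len):
    """Convolve input_sig with a segment of ir_source used as the IR."""
    ir = ir_source[ir_start:ir_start + ir_len]
    ir = ir / (np.max(np.abs(ir)) + 1e-12)        # normalise the captured IR
    wet = fftconvolve(input_sig, ir)              # wet-only, as in this session
    return wet / (np.max(np.abs(wet)) + 1e-12)    # avoid level explosions

# e.g. a percussive flute attack as IR, a sustained guitar line as input:
# wet = live_convolve(flute, guitar, ir_start=0, ir_len=22050)
```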

Sonics of convolution

We continued the process by switching from auto to trigger mode, with the flute in the role of the IR, to make the convolver a bit more responsive. This produced a few nice moments, but the overall result was still quite “mushy”. We also reduced the IR size and worked with the IR pitch to find ways of getting other sound qualities.
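
As a rough illustration of these two IR controls: IR size can be thought of as truncating the captured buffer, and IR pitch as resampling it. Whether the plugin implements them exactly this way is an assumption; the sketch below just shows the idea.

```python
import numpy as np
from scipy.signal import resample

def shape_ir(ir, size_frac=1.0, pitch_ratio=1.0):
    """Truncate the IR to size_frac of its length, then repitch by resampling.

    pitch_ratio > 1 transposes the IR up (and shortens it further)."""
    n = max(1, int(len(ir) * size_frac))
    faded = ir[:n] * np.hanning(2 * n)[n:]        # fade out to avoid a click at the cut
    return resample(faded, max(1, int(n / pitch_ratio)))
```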

We subsequently switched from flute to vocal, working with guitar and voice in trigger mode, with the voice used as IR. We also decided to add in some dry sound from the vocal, since the listeners (Mats and Bjørnar) found it much more interesting when we could hear the relation between the IR and the convolved signal. Since the convolver still felt quite slow in response and muddy in sound, we also tried out a very short IR size in auto mode.
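
The difference between the two update modes can be sketched like this: auto mode re-captures the IR at a fixed rate, while trigger mode re-captures on a transient in the IR source. The envelope/threshold logic here is a guess at how such a trigger could work, not a description of the plugin internals.

```python
import numpy as np

def should_update_ir(block, block_idx, mode, prev_rms,
                     rate_blocks=20, rise_db=12.0):
    """Decide, per audio block, whether to re-capture the IR."""
    rms = float(np.sqrt(np.mean(block ** 2) + 1e-12))
    if mode == "auto":                          # fixed update rate
        fire = block_idx % rate_blocks == 0
    else:                                       # "trigger": sudden level rise
        fire = rms > prev_rms * 10 ** (rise_db / 20.0)
    return fire, rms
```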

As shown in this video, we tried to find out how the vocal sound could affect the processing of the flute. The experience was that we got either some strange reverberation or some strange EQ – the sound that came out of the processed flute was not as interesting as we had hoped!

Trying again – it doesn’t feel like we get any interesting sounds. Is it the vocal input that doesn’t give the flute sound anything to work with? The flute sound we get out is still a bit harsh in the higher frequencies, and with a bit of strange reverberation.

Critical listening

All these initial tests were more or less didactic, in that we chose fixed materials (such as percussive versus sustained) in order to emphasise the effect of the convolver. After three short sessions this became a limitation that hindered improvisational flow and phrasing. Especially in the flute/vocal session this was an issue. Too often, the sonic result of the convolution was less than intriguing. We discussed whether the live convolver would lend itself more easily to composed situations, as the necessity of carefully selecting material types that would convolve in an interesting way rendered it less usable in improvised settings. We decided to add more control to the setup in order to remedy this problem.

Adding control

Gyrid commented that she felt the system was quite limiting, especially when you are used to controlling the live processing yourself. To remedy this, we started adding parameter controls for the performers. In a session with vocal and guitar, we added an expression pedal for Bernt Isak (guitar) to control the IR size. This was the first time we experienced a responsive system that made musical sense.

Revised setup

After some frustrating and, in our perception, failed attempts at getting interesting musical results, we decided to revise our setup. After some discussion we came to the conclusion that our existing setup, using one live convolver, was cross adaptive in the signal control domain, but didn’t feel cross adaptive in the musical domain. We therefore decided to add another crossing by using two live convolvers, where each instrument acted as IR in one convolver and as input in the other. We also decided to set one convolver to auto mode and the other to trigger mode, for better separation and more variation in the musical output.
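
A minimal sketch of this revised routing, again offline and with illustrative names: each instrument shapes the other, one convolver per direction (in the live setup one side updated in auto mode, the other in trigger mode).

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_with_ir(sig, ir):
    ir = ir / (np.max(np.abs(ir)) + 1e-12)
    wet = fftconvolve(sig, ir)
    return wet / (np.max(np.abs(wet)) + 1e-12)

def double_convolver(inst_a, inst_b, ir_len=22050):
    """Each instrument is IR in one convolver and input in the other."""
    wet_a = convolve_with_ir(inst_a, inst_b[:ir_len])   # B's sound shapes A
    wet_b = convolve_with_ir(inst_b, inst_a[:ir_len])   # A's sound shapes B
    return wet_a, wet_b
```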

Guitar

  • Fader 1: IR size
  • Fader 2: IR auto update-rate
  • Fader 3: Dry sound
  • Expression pedal 1: IR pitch
  • Toggle switch: switch inputs

Vocal

  • Fader 1: IR size
  • Fader 2: IR pitch

Flute

  • Fader 1: IR size
  • Fader 2: IR pitch
  • Fader 3: Dry sound

The convolvers were placed on return tracks, both panned slightly to the sides to make the two convolvers easier to distinguish, while also adding some stereo width.

Sound excerpt 1 – flute & vocal, using two convolvers:

Bjørnar has the same setup as Bernt Isak. A better experience. Using different microphones – maybe one inside and one outside the flute – could change the way the convolver uses the signal.

It’s quite apparent to us that using sustained sounds doesn’t work very well. It seems to us that the effect just makes the flute sound less interesting – it somehow reduces the bandwidth, amplifies the resonant frequencies or just makes some strange phasing. The soundscape changes and gets more interesting when we shift to more percussive and distorted sound qualities. Could it be an idea to make it possible to extract only the distorted parts of a sound input?

Sound excerpt 2 – guitar & vocal, using two convolvers:

Session with guitar and vocal where we control the IR size, IR pitch and IR rate.

Gyrid has the input signal in trigger mode and can control IR size and pitch with faders. Bernt Isak has the input signal in auto mode and can control the amount of dry signal, IR size and rate with faders, and pitch with an expression pedal. Using two convolvers was a very positive experience! Even though a single convolver is cross adaptive in the sense that it uses two signals, it didn’t feel cross adaptive musically – more like a traditional live processing setup. We also found that having one convolver in trigger mode and one in auto mode was a good way of adding movement and variation to the music, as one convolver keeps a more steady “timing” while the other can be completely free. It also seems essential to be able to control the dry signal – hearing the dry signal makes the music more three-dimensional.

Sound excerpt 3 – flute & guitar, using two convolvers:

Session with guitar and flute – Bjørnar has the same setup as Gyrid, but with added control over the amount of dry sound. Same issue with the flute microphone as above.

The experience is very different between flute and vocals and between guitar and vocals; this mainly has to do with the way the instruments are played. The guitar has a very distinct attack, and it is very clear when the timbral character changes. Flute and vocals have a more similar frequency response, and the result gets less interesting. Adding more effects to the guitar (distortion + tremolo) makes a huge difference – but so does the fact that percussive sounds from the vocal give the most interesting musical output.

Overall session reflections 

Choice of instrument combinations is crucial for live convolving to be controllable and to produce artistically interesting results. We also noted that there is a difference between signal cross adaptiveness and musical cross adaptiveness. From our experience, double live convolving is needed to produce a feel of musical cross adaptiveness similar to what we have experienced in the previous cross adaptive signal processing sessions.

Ideas for further development

Some ideas for adding more control: the possibility to switch between auto mode and trigger mode could be interesting. It would also be useful to have a visual indicator for the IR trigger mode, for easier tuning of the trigger settings.

Bjørnar suggested combining the live convolver with the analyser/MIDI mapper in order to only convolve when there is a minimum of noise and/or transients available, e.g. linking spectral crest to a wet/dry parameter in the convolution, or splitting the audio signal based on spectral conditions.
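
A sketch of that idea, assuming spectral crest is taken as max/mean of the magnitude spectrum: a low crest (flat, noisy spectrum) pushes the balance towards the convolved signal, a high crest (tonal) towards dry. The mapping range is made up for illustration.

```python
import numpy as np

def spectral_crest(block):
    """Crest of the magnitude spectrum: high = tonal/peaky, low = noisy/flat."""
    mag = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    return float(np.max(mag) / (np.mean(mag) + 1e-12))

def crest_to_wet(crest, lo=5.0, hi=50.0):
    """Map crest to a wet amount in [0, 1]; noisy material -> more wet."""
    x = np.clip((crest - lo) / (hi - lo), 0.0, 1.0)
    return 1.0 - x
```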

It could perhaps also yield some interesting results to add spectral processing that reduces the fundamental frequency component (similar to Øyvind Brandtsegg’s feedback tools) for instruments that have a very strong fundamental (like the flute).
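
One way such fundamental reduction could look, using a crude FFT-peak estimate of f0 and an IIR notch filter; a real implementation would want proper pitch tracking, and all parameter values here are placeholders.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

def reduce_fundamental(sig, sr=44100, q=30.0):
    """Notch out the strongest spectral peak, assumed to be the fundamental."""
    mag = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / sr)
    f0 = freqs[np.argmax(mag[1:]) + 1]         # skip the DC bin
    b, a = iirnotch(f0, q, fs=sr)
    return lfilter(b, a, sig)
```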

Second session at Norwegian Academy of Music (Oslo) – January 13 and 19, 2017

Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice)

The focus for this session was to play with, fine-tune and work further on the mappings we set up during the last session at NMH in November. For practical reasons, we had to split the session into two half days, on the 13th and 19th of January.

13th of January 2017

We started by analysing 4 different musical gestures for the guitar, a step that had been skipped due to time constraints during the last session. During this analysis we found the need to specify the spread of the analysis results in addition to the region, so that we could differentiate the analysis results in terms of stability and conclusiveness. We decided to analyse the flute and vocal again to add the new parameters.
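
The region/spread distinction could be summarised like this (the actual analyser’s feature set and statistics are not shown here; names are illustrative):

```python
import numpy as np

def summarise_feature(values):
    """Summarise one analysis feature over repeated takes of a gesture."""
    v = np.asarray(values, dtype=float)
    return {
        "region": (float(v.min()), float(v.max())),   # where the values fall
        "spread": float(v.std()),                     # low -> stable, conclusive
    }
```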

19th of January 2017

After the analysis was done, we started working on a mapping scheme involving all 3 instruments, so that we could play in a trio setup. The mappings between flute and vocal were the same as in the November session.

The analyser was still run in Reaper, but all routing, effect chains and mapping (MIDIator) were now done in Live. Software instability (the old Reaper projects from November wouldn’t open) and the change of DAW from Reaper to Live meant that we had to set up and tune everything from scratch.

Sound examples with comments and immediate reflections

1. Guitar & Vocal – first duo test; not ideal, as we forgot to mute the analyser.

2. Guitar & Vocal retake – Listened back on speakers after recording. Nice sounding. Promising.

Reflection: there seem to be some elements missing, in a good way, meaning that there is space left for things to happen in the trio format. There is still a need for fine-tuning of the relationship between guitar and vocal. This stems from the mapping having been done mainly with the trio format in mind.

3. Vocals & flute – Listened back on speakers after recording.

Reflections: a dynamic soundscape with quite diverse results; some of the same situations as with take 2 – the sounds feel complementary to something else. Effect tuning: a more subtle ring mod (good!) compared to last session, but the filter on the vocals is a bit too heavy-handed. Should we flip the vocal filter? This could prevent filtering and reverb from taking place simultaneously. Concern: is the guitar/vocal relationship weaker compared to vocal/flute? Another idea came up – should we look at connecting gates or bypasses in order to create dynamic transitions between dry and processed signals? (See the sketch below.)
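
The gate/bypass idea could look something like this: an envelope follower opens a smoothed crossfade from dry towards processed only above a threshold. Threshold and smoothing values are placeholders.

```python
import numpy as np

def gated_crossfade(dry, wet, sr=44100, thresh=0.05, smooth_ms=50.0):
    """Crossfade towards the processed signal only when the dry input is active."""
    n = min(len(dry), len(wet))
    coef = np.exp(-1.0 / (sr * smooth_ms / 1000.0))
    env, mix, out = 0.0, 0.0, np.zeros(n)
    for i in range(n):
        env = max(abs(dry[i]), env * coef)        # peak envelope follower
        target = 1.0 if env > thresh else 0.0     # gate decision
        mix = mix * coef + target * (1.0 - coef)  # smoothed crossfade position
        out[i] = (1.0 - mix) * dry[i] + mix * wet[i]
    return out
```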

4. Flute & Guitar

Reflections: both the flute ring mod and the guitar delay are a bit on the heavy side, not responsive enough. Interesting how the effect transformations affect material choices when improvising.

5. Trio

Comments and reflections after the recording session

It is interesting to be in a situation where you, as you play, have a multi-layered focus: playing, listening, thinking about how you affect the processing of your fellow musicians and how your own sound is affected, all while trying to make something worth listening to. Of course we are now in an “etude mode”, but still striving for the goal: great output!

There seems to be a bug in the analyser tool when it comes to consistency – sometimes some parameters fall out. We found that it is a good idea to run the analysis a couple of times for each sound to get the most precise result.
