Sessions – Cross adaptive processing as musical intervention
http://crossadaptive.hf.ntnu.no
Exploring radically new modes of musical interaction in live performance

Session with Kim Henry Ortveit
http://crossadaptive.hf.ntnu.no/index.php/2018/02/22/session-with-kim-henry-ortveit/
Thu, 22 Feb 2018

Kim Henry is currently a master's student at NTNU music technology, and as part of his master's project he has designed a new hybrid instrument. The instrument allows close interactions between what is played on a keyboard (or rather a Seaboard) and a set of drum pads. Modulations and patterns played on one part of the instrument will determine how other components of the instrument actually sound. This is combined with some intricate layering, looping, and quantization techniques that allow shaping of the musical continuum in novel ways. Since the instrument is in itself crossadaptive between its constituent parts (and simply also because we think it sounds great), we wanted to experiment with it within the crossadaptive project too.


Kim Henry’s instrument

The session was done as an interplay between Kim Henry on his instrument and Øyvind Brandtsegg on vocals and crossadaptive processing. It was conducted as an initial exploration of just “seeing what would happen” and trying out various ways of making crossadaptive connections here and there. The two audio examples here are excerpts of longer takes.

Take 1, Kim Henry Ortveit and Øyvind Brandtsegg

 

2018_02_Kim_take2

Take 2, Kim Henry Ortveit and Øyvind Brandtsegg

 

Session with Michael Duch
http://crossadaptive.hf.ntnu.no/index.php/2018/02/22/session-with-michael-duch/
Thu, 22 Feb 2018

February 12th, we did a session with Michael Duch on double bass, exploring auto-adaptive use of our techniques. We were interested in seeing how the crossadaptive techniques could be used for personal timbral expansion for a single player. This is a step back in complexity from the crossadaptive interplay, but it is interesting for two reasons. One is to check how useful our modulation techniques are in a setting with more traditional performer control: when there is only one performer modulating himself, there is a closer relationship between performer intention and timbral result. The other is that we know from Michael's work with Lemur and in other settings that he relates intently and intimately to the performance environment, the resonances of the room and the general ambience. Due to this focus, we also wanted to use live convolution techniques where he first records an impulse response and then plays through the same filter himself. This exposed one feature needed in the live convolver: one might want to delay the activation of the new impulse response until its recording is complete (otherwise we will almost certainly get extreme resonances while recording, since the filter and the excitation are very similar). That technical issue aside, it was musically very interesting to hear how he explored resonances in his own instrument, and also used small amounts of detuning to approach beating effects in the relation between filter and excitation signal. The self-convolution also exposes parts of the instrument spectrum that are usually not so noticeable, like bassy components of high notes, or prominent harmonics that would otherwise be perceptually masked by their merging into the full tone of the instrument.
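As a sketch of how such deferred activation could work: record the incoming audio into a staging buffer and only swap it in as the active filter once the recording is finished. This is a minimal, simplified Python/numpy outline (hypothetical class and method names, not the actual liveconvolver code), which also ignores the overlap-add tail handling a real partitioned convolver would need:

```python
import numpy as np
from scipy.signal import fftconvolve

class DeferredIRConvolver:
    """Convolve input blocks with an impulse response, but keep any
    newly recorded IR in a staging buffer until recording completes.
    This avoids the extreme resonances of filtering a signal through
    an IR recorded from (nearly) the same signal."""

    def __init__(self, ir_length):
        self.active_ir = np.zeros(ir_length)   # IR used for filtering
        self.staging_ir = np.zeros(ir_length)  # IR being recorded
        self.write_pos = 0
        self.recording = False

    def start_ir_recording(self):              # e.g. on pedal press
        self.staging_ir[:] = 0.0
        self.write_pos = 0
        self.recording = True

    def process_block(self, block):
        if self.recording:
            n = min(len(block), len(self.staging_ir) - self.write_pos)
            self.staging_ir[self.write_pos:self.write_pos + n] = block[:n]
            self.write_pos += n
            if self.write_pos >= len(self.staging_ir):
                # Recording complete: only now activate the new IR.
                self.active_ir, self.staging_ir = \
                    self.staging_ir, self.active_ir
                self.recording = False
        # While recording, keep filtering through the *previous* IR.
        # (The convolution tail is dropped here for brevity; a real
        # implementation would use partitioned convolution/overlap-add.)
        return fftconvolve(block, self.active_ir)[:len(block)]
```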

 

2018_02_Michael_test1

Take 1,  autoadaptive exploration

2018_02_Michael_test2

Take 2,  autoadaptive exploration

Self convolution

2018_02_Michael_conv1

Self-convolution take 1

2018_02_Michael_conv2

Self-convolution take 2

2018_02_Michael_conv3

Self-convolution take 3

2018_02_Michael_conv4

Self-convolution take 4

2018_02_Michael_conv5

Self-convolution take 5

2018_02_Michael_conv6

Self-convolution take 6

 

Session with David Moss in Berlin
http://crossadaptive.hf.ntnu.no/index.php/2018/02/02/session-with-david-moss-in-berlin/
Fri, 02 Feb 2018

Thursday February 1st, we had an enjoyable session at the Universität der Künste in Berlin. This took place at the Grunewaldstraße campus and was generously hosted by professor Alberto De Campo. It was a nice opportunity to follow up on earlier collaborations with David Moss, as we have learned so much about performance, improvisation and music in general from him on earlier occasions.
Earlier the same week I had presented the crossadaptive project for prof. De Campo's students of computational art and performance with complex systems. This environment of arts and media studies at UdK was particularly conducive to our research, and we had some very interesting discussions.

David Moss – vocals
Øyvind Brandtsegg – crossadaptive processing, vocals
Alberto De Campo – observer
Marija Mickevica – observer, and vocals on one take

 

More details on these tracks will follow; for now, I am uploading them here so that the involved parties can access them.

DavidMoss_Take0

Initial exploration, the (becoming) classic reverb+delay crossadaptive situation

DavidMoss_Onefx1

Test session, exploring one effect only

DavidMoss_Onefx2

Test session, exploring one effect only  (2)

DavidMoss_Take1

First take

DavidMoss_Take2

Second take

DavidMoss_Take3

Third take

Then we did some explorations of David telling stories, live convolving with Øyvind’s impulse responses.

DavidMoss_Story0

Story 1

DavidMoss_Story1

Story 2

And we were lucky that student Marija Mickevica wanted to try recording live impulse responses while David was telling stories. Here’s an example:

DavidMoss_StoryMarija

Story with Marija’s impulse responses

And a final take with David and Øyvind, where all previously tested effects and crossadaptive mappings were enabled. A selective mix of effects and modulations was controlled manually by Øyvind during the take.

DavidMoss_LastTake

Final combined take

 

Session with 4 singers, Trondheim, August 2017
http://crossadaptive.hf.ntnu.no/index.php/2017/10/09/session-with-4-singers-trondheim-august-2017/
Mon, 09 Oct 2017

Location: NTNU, Studio Olavshallen.

Date: August 28 2017

Participants:
Sissel Vera Pettersen, vocals
Ingrid Lode, vocals
Heidi Skjerve, vocals
Tone Åse, vocals

Øyvind Brandtsegg, processing
Andreas Bergsland, observer and video documentation
Thomas Henriksen, sound engineer
Rune Hoemsnes, sound engineer

We also had the NTNU documentation team (Martin Kristoffersen and Ola Røed) making a separate video recording of the session.

Session objective and focus:
We wanted to try out crossadaptive processing with similar instruments. Until this session, we had usually used it on a combination of two different instruments, leading to very different analysis conditions. The analysis methods respond a bit differently to each instrument type, and each instrument also "triggers" the processing in its own particular manner. It was thought interesting to try some experiments under more "even" conditions. Using four singers and combining them in different duo configurations, we also saw the potential for gleaning personal expressive differences and approaches to the crossadaptive performance situation. This also allowed them to switch roles, i.e. performing under the processing condition where they previously had the modulating role. No attempt was made to exhaustively try every possible combination of roles and effects; we just wanted to try a variety of scenarios possible with the current resources. The situation proved interesting in so many ways that further exploration would be necessary to probe its research potential.
In addition to the analyzer-modulator variant of crossadaptive processing, we also did several takes of live convolution and streaming convolution. This session was the very first performative exploration of streaming convolution.

We used a reverb (Valhalla) on one of the signals, and a custom granular reverb (partikkelverb) on the other. The crossadaptive mappings were first designed so that each of the signals could have a "prolongation" effect (larger size for the reverb, more time smearing for the granular effect). However, after the first take, it seemed that the time smearing of the granular effect was not clearly perceived as a musical gesture. We therefore replaced the time smearing parameter of the granular effect with a "graininess" parameter (controlling grain duration). This setting was used for the remaining takes. We used transient density combined with amplitude to control the reverb size, where louder and faster singing would make the reverb shorter (smaller). We used dynamic range to control the time smearing parameter of the granular effect, and transient density to control the grain size (faster singing makes the grains shorter).
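The mapping logic behind these modulations is essentially a scaling of each analysis stream into a parameter range, with inverted mappings (louder/faster gives shorter reverb) expressed by swapping the output bounds. A minimal sketch; the numeric ranges are illustrative, not the values used in the session:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Map an analysis value from its expected input range to a
    parameter range, clipping at the bounds. An inverted mapping is
    made by letting out_lo be larger than out_hi."""
    pos = (value - in_lo) / (in_hi - in_lo)
    pos = min(max(pos, 0.0), 1.0)
    return out_lo + pos * (out_hi - out_lo)

def reverb_size(amplitude, transient_density):
    # Louder and faster singing gives a shorter (smaller) reverb:
    # both features map inversely, and their contributions are averaged.
    from_amp = scale(amplitude, 0.0, 1.0, 4.0, 0.3)            # seconds
    from_density = scale(transient_density, 0.5, 8.0, 4.0, 0.3)
    return 0.5 * (from_amp + from_density)

def grain_duration(transient_density):
    # Faster singing makes the grains shorter.
    return scale(transient_density, 0.5, 8.0, 0.1, 0.005)      # seconds
```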

Video digest of the session

Crossadaptive analyzer-modulator takes

ed_XA 01

Crossadaptive take 1: Heidi/Ingrid
Heidi has a reverb controlled by Ingrid's amplitude and transient density
– louder and faster singing makes the reverb shorter
Ingrid has a time smearing effect.
– time is more slowed down when Heidi uses a larger dynamic range

 

ed_XA 02

Crossadaptive take 2: Heidi/Sissel
Heidi has a reverb controlled by Sissel's amplitude and transient density
– louder and faster singing makes the reverb shorter
Sissel has a granular effect.
– the effect is more grainy (shorter grain duration) when Heidi sings with a higher transient density (faster)

 

ed_XA 03

Crossadaptive take 3: Sissel/Tone
Sissel has a reverb controlled by Tone's amplitude and transient density
– louder and faster singing makes the reverb shorter
Tone has a granular effect.
– the effect is more grainy (shorter grain duration) when Sissel sings with a higher transient density (faster)

 

ed_XA 04

Crossadaptive take 4: Tone/Ingrid
Ingrid has a reverb controlled by Tone's amplitude and transient density
– louder and faster singing makes the reverb shorter
Tone has a granular effect.
– the effect is more grainy (shorter grain duration) when Ingrid sings with a higher transient density (faster)

 

ed_XA 05

Crossadaptive take 5: Tone/Ingrid
Same settings as for take 4

Convolution

Doing live convolution with two singers was thought interesting for the same reasons as listed in the introduction, creating a controlled scenario with two similarly-featured signals. As the voice is in itself one of the richest instruments in terms of signal variation, it was also interesting to explore convolution with these instruments. We used the now familiar live convolution techniques, where one of the performers records an impulse response and the other plays through it. In addition, we explored streaming convolution, developed by Victor Lazzarini as part of this project. In streaming convolution, the two signals are treated even more equally than is the case in live convolution. Streaming convolution simply convolves two circular buffers of a predetermined length, allowing both signals to have exactly the same role in relation to the other. It also has a "freeze mode", where updating of the buffer is suspended, allowing one or the other (or both) of the signals to be kept stationary as a filter for the other. This freezing was controlled by a physical pedal, in the same manner as we use a pedal to control IR sampling with live convolution. In some of the videos one can see the singers raising a hand, as a signal to the other that they are now freezing their filter. When the signal is not frozen (i.e. streaming), there is a practically indeterminate latency in the process as seen from the performer's perspective. This stems from the fact that the input stream is segmented with respect to the filter length. Any feature recorded into the filter will have a position in the filter depending on when it was recorded, and the perceived latency between an input impulse and the convolver output relies on where in the "impulse response" the most significant energy or transient can be found. The technical latency of the filter is still very low, but the perceived latency depends on the material.
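In rough signal terms, the process can be sketched as a circular convolution of the two continuously updated buffers, with freeze flags that suspend the buffer updates. This is a conceptual numpy sketch, assuming equal buffer lengths and glossing over the block-wise overlap-add details of a real-time implementation:

```python
import numpy as np

BUF_LEN = 8192                 # filter length in samples (illustrative)
buf_a = np.zeros(BUF_LEN)      # circular buffer for performer A
buf_b = np.zeros(BUF_LEN)      # circular buffer for performer B
pos_a = pos_b = 0
freeze_a = freeze_b = False    # pedal-controlled freeze flags

def process_block(block_a, block_b):
    """Write both inputs into their circular buffers (unless frozen)
    and convolve the two buffers. Both signals have exactly the same
    role; where a feature lands in the buffer, and hence the perceived
    latency, depends on when it was recorded."""
    global pos_a, pos_b
    n = len(block_a)
    if not freeze_a:
        idx = (pos_a + np.arange(n)) % BUF_LEN
        buf_a[idx] = block_a
        pos_a = (pos_a + n) % BUF_LEN
    if not freeze_b:
        idx = (pos_b + np.arange(n)) % BUF_LEN
        buf_b[idx] = block_b
        pos_b = (pos_b + n) % BUF_LEN
    # Circular convolution of the two buffers via the FFT.
    out = np.fft.irfft(np.fft.rfft(buf_a) * np.fft.rfft(buf_b))
    return out[:n]   # only the newest block is returned, for brevity
```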

ed_LC 01

Liveconvolver take 1: Tone/Sissel
Tone records the IR

 

ed_LC 02

Liveconvolver take 2: Tone/Sissel
Sissel records the IR

 

ed_LC 03

Liveconvolver take 3: Heidi/Sissel
Sissel records the IR

 

ed_LC 04

Liveconvolver take 4: Heidi/Sissel
Heidi records the IR

 

ed_LC 05

Liveconvolver take 5: Heidi/Ingrid
Heidi records the IR

Streaming Convolution

These are the very first performative explorations of the streaming convolution technique.

ed_TVC

Streaming convolution take 1: Heidi/Sissel

 

ed_TVC2

Streaming convolution take 2: Heidi/Tone

 

 

Session in UCSD Studio A
http://crossadaptive.hf.ntnu.no/index.php/2017/09/08/session-in-ucsd-studio-a/
Fri, 08 Sep 2017

This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all these performers in different constellations, some new combinations were tested this day. The approach was to explore fairly complex feature-modulator mappings. No particular focus was put on intellectualizing the details of these mappings; rather, we experienced them as a whole, "as instrument". I had found that simple mappings, although easy to decode and understand for both performer and listener, quickly would "wear out" and become flat, boring or plainly limiting for musical development during the piece. I attempted to create some "rich" mappings, combining different levels of subtlety: some clearly audible and some subtle timbral effects. The mappings were designed with some specific musical gestures and interactions in mind, and these are listed together with the mapping details for each constellation later in this post.

During this session, we also explored the live convolver in terms of how the audio content in the IR affects the resulting creative options and performative environment for the musician playing through the effect. The liveconvolver takes are presented interspersed with the crossadaptive “feature-modulator” (one could say “proper crossadaptive”) takes. Recording of the impulse response for the convolution was triggered via an external pedal controller during performance, and we let each musician in turn have the role of IR recorder.

Participants:
Jordan Morton: double bass and voice
Miller Puckette: guitar
Steven Leffue: sax
Kyle Motl: double bass
Oeyvind Brandtsegg: crossadaptive mapping design, processing
Andrew Munsie: recording engineer

The music played was mostly free improvisation, but two of the takes with Jordan Morton were performances of her compositions. These were composed in dialogue with the system, during and in between earlier sessions. She both plays the bass and sings, and wanted to explore how phrasing and shaping of precomposed material could be used to expressively control the timbral modulations of the effects processing.

Jordan Morton: bass and voice.

These pieces are composed by Jordan, with the intention that they be performed freely and shaped according to the situation at performance time, allowing the crossadaptive modulations ample room for influence on the sound.

Jordan Morton – I confess

“I confess” (Jordan Morton). Bass and voice.

 

Jordan Morton – Backbeat thing

“Backbeat thing” (Jordan Morton). Bass and voice.

 

The effects used:
Effects on vocals: Delay, Resonant distorted lowpass
Effects on bass: Reverb, Granular tremolo

The features and the modulator mappings:
(also stating an intended purpose for each mapping)

  • Bass spectral flatness, and
  • Bass spectral flux: both features giving less reverb time on bass

Purpose: When the bass becomes more noisy, it will get less reverb

  • Vocal envelope dynamics (dynamic range), and
  • Vocal transient density: both features giving lower lowpass filter cutoff frequency on reverb on bass

Purpose: When the vocal becomes more active, the bass reverb will be less pronounced

  • Bass transient density: higher cutoff frequency (resonant distorted lowpass filter) on vocal

Purpose: to animate a distorted lo-fi effect on the vocals, according to the activity level on bass

  • Vocal mfcc-diff (formant strength, “pressed-ness”): Send level for granular tremolo on bass

Purpose: add animation and drama to the bass when the vocal becomes more energetic

  • Bass transient density: lower lowpass filter frequency for the delay on vocal

Purpose: clean up vocal delays when the bass becomes more active

  • Vocal transient density: shorter delay time for the delay on vocal
  • Bass spectral flux: longer delay time for the delay on vocal

Purpose: just for animation/variation

  • Vocal dynamic range, and
  • Vocal transient density: both features giving less feedback for the delay on vocal

Purpose: clean up vocal delay for better articulation on text
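The mappings above can be summarised as a routing table from analysis features to effect parameters, which is roughly what the modulation-mapping setup encodes. A hypothetical data-structure sketch (the names are illustrative shorthand, not identifiers from the actual software):

```python
# Each entry: (source feature, target parameter, direction, purpose).
# Several features can address the same parameter; their scaled values
# are then combined (e.g. summed or averaged) before being applied.
MAPPINGS = [
    ("bass.spectral_flatness", "bass.reverb.time",      "inverse", "noisy bass gets less reverb"),
    ("bass.spectral_flux",     "bass.reverb.time",      "inverse", "noisy bass gets less reverb"),
    ("voc.dynamic_range",      "bass.reverb.lp_cutoff", "inverse", "active vocal damps bass reverb"),
    ("voc.transient_density",  "bass.reverb.lp_cutoff", "inverse", "active vocal damps bass reverb"),
    ("bass.transient_density", "voc.lp_filter.cutoff",  "direct",  "animate lo-fi effect on vocal"),
    ("voc.mfcc_diff",          "bass.tremolo.send",     "direct",  "energetic vocal animates bass"),
    ("bass.transient_density", "voc.delay.lp_cutoff",   "inverse", "clean up vocal delays"),
    ("voc.transient_density",  "voc.delay.time",        "inverse", "animation/variation"),
    ("bass.spectral_flux",     "voc.delay.time",        "direct",  "animation/variation"),
    ("voc.dynamic_range",      "voc.delay.feedback",    "inverse", "articulation on text"),
    ("voc.transient_density",  "voc.delay.feedback",    "inverse", "articulation on text"),
]
```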

 

Liveconvolver tracks Jordan/Jordan:

The tracks are improvisations. Here, Jordan’s voice was recorded as the impulse response and she played bass through the voice IR. Since she plays both instruments, this provides a unique approach to the live convolution performance situation.

Liveconvolver bass/voice

Liveconvolver take 1: Jordan Morton bass and voice

 

Liveconvolver bass/voice 2

Liveconvolver take 2: Jordan Morton bass and voice

 

Jordan Morton and Miller Puckette

Liveconvolver tracks Jordan/Miller:

These tracks were improvised by Jordan Morton (bass) and Miller Puckette (guitar). Each of the musicians was given the role of "impulse response recorder" in turn, while the other played through the convolver effect.

20170511-Brandtsegg-Tk-12-Edit-A-Mix-V1

Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Miller records the IR.

20170511-Brandtsegg-Tk-14-Edit-A-Mix-V1

Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Jordan records the IR.

 

Discussion on the performance with live convolution, with Jordan Morton and  Miller Puckette.

Miller Puckette and Steven Leffue

These tracks were improvised by Miller Puckette (guitar) and Steven Leffue. The feature-modulator mapping was designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping before the first take. The intention of this strategy was to create a naturally flowing environment of exploration, with not-too-obvious relationships between instrumental gestures and resulting modulations. After the first take, some more details of selected elements of the mapping (one for each musician) were repeated for the performers, with the anticipation that these features might be explored more consciously.

Take 1:

20170511-Brandtsegg-Tk-18-Edit-A-Mix-V1b

Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 1. Details of the feature-modulator mapping are given below.

Discussion 1 on the crossadaptive performance, with Miller Puckette and Steven Leffue. On the relationship between what you play and how that modulates the effects, on balance of monitoring, and other issues.

The effects used:
Effects on guitar: Spectral delay
Effects on sax: Resonant distorted lowpass, Spectral shift, Reverb

The features and the modulator mappings:
(also stating an intended purpose for each mapping)

  • Guitar envelope crest: longer reverb time on sax

Purpose: dynamic guitar playing will make a big room for the sax

  • Guitar transient density: higher cutoff frequency for reverb highpass filter and lower cutoff frequency for reverb lowpass filter

Purpose: when guitar is more active, the reverb on sax will be less full (less highs and less lows)

  • Guitar transient density (again): downward spectral shift on sax

Purpose: animation and variation

  • Guitar spectral flux: higher cutoff frequency (resonant distorted lowpass filter) on sax

Purpose: just for animation and variation. Note that spectral flux (especially on the guitar) will also give high values on single notes in the low register (the lowest octave), in addition to the expected behaviour of giving higher values on more noisy sounds.

  • Sax envelope crest: less delay send on guitar

Purpose: more dynamic sax playing will "dry up" the guitar delays; one must play long notes to open the guitar's send to the delay

  • Sax transient density: longer delay time on guitar. This modulation mapping was also gated by the rms amplitude of the sax (so that it is only active when the sax gets loud)

Purpose: loud and fast sax will give more distinct repetitions (further apart) on the guitar delay

  • Sax pitch: increase spectral delay shaping of the guitar (spectral delay with different delay times for each spectral band)

Purpose: more unnatural (crazier) effect on guitar when sax goes high

  • Sax spectral flux: more feedback on guitar delay

Purpose: noisy sax playing will give more distinct repetitions (more repetitions) on the guitar delay
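One detail worth pointing out is the gating of the sax-to-delay-time mapping by the sax amplitude. A minimal sketch of such a gated mapping, with purely illustrative threshold and range values:

```python
def gated_mapping(feature, gate, gate_threshold, out_lo, out_hi):
    """Apply a feature-to-parameter mapping only while a second
    analysis stream (here: sax rms amplitude) exceeds a threshold;
    otherwise the parameter rests at out_lo."""
    if gate < gate_threshold:
        return out_lo                      # gate closed
    pos = min(max(feature, 0.0), 1.0)      # feature assumed normalized
    return out_lo + pos * (out_hi - out_lo)

# Sax transient density -> guitar delay time, active only when loud:
delay_time = gated_mapping(feature=0.7, gate=0.4,
                           gate_threshold=0.3,
                           out_lo=0.1, out_hi=1.2)   # seconds
```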

Take 2:

20170511-Brandtsegg-Tk-21-Edit-A-Mix-V1

Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 2. The feature-modulator mapping was the same as for take 1.

Discussion 2 on the crossadaptive performance, with Miller Puckette and Steven Leffue. On how instructions and intellectualizing the mapping made it harder.

Liveconvolver tracks:

Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.

20170511-Brandtsegg-Tk-22-Edit-A-Mix-V1

Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Miller records the IR.

Discussion 1 on playing with the live convolver, with Miller Puckette and Steven Leffue.

20170511-Brandtsegg-Tk-23-Edit-A-Mix-V1

Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Steven records the IR.

Discussion 2 on playing with the live convolver, with Miller Puckette and Steven Leffue.

 

Steven Leffue and Kyle Motl

Two different feature-modulator mappings were used, and we present one take of each mapping. Like the mappings used for Miller/Steven, these were designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping. The mapping used for the first take closely resembles the mapping for Steven/Miller, with just a few changes to accommodate the different musical context and the way the analysis methods respond to the instruments.

  • Bass transient density: shorter reverb time on sax
  • The reverb equalization (highpass and lowpass) was skipped
  • Bass envelope crest: increase send level for granular processing on sax
  • Bass rms amplitude: Parametric morph between granular tremolo and granular time stretch on sax

 

Crossadaptive take 1 Steven / Kyle

 

In the first crossadaptive take in this duo, Kyle commented that the amount of delay made it hard to play, and that any fast phrases would just turn into mush. It seemed the choice of effects and modulations was not optimal, so we tried another configuration of effects (and thus another mapping of features to modulators).

 

Crossadaptive take 2 Steven / Kyle

 

This mapping had earlier been used for duo playing between Kyle (bass) and Øyvind (vocal) on several occasions, and it was merely adjusted to accommodate the different timbral dynamics of the saxophone. In this way, Kyle was familiar with the possibilities of the mapping, but not with the context in which it would be used.
The granular processing on both instruments was done with the Hadron Particle Synthesizer, which allows multidimensional parameter navigation through a relatively simple modulation interface (X, Y and 4 expression controllers). The specifics of the actual modulation routing and mapping within Hadron could be described, but in the context of the current report further technical detail would only take away from the clarity of the presentation. Even though the details of the parameter mapping were designed deliberately, at this point in the performative approach to playing with it we no longer paid attention to technical specifics. Rather, the focus was on letting go and trying to experience the timbral changes rather than intellectualizing them.

The effects used:
Effects on sax: Delay, granular processing
Effects on bass: Reverb, granular processing

The features and the modulator mappings:
(also stating an intended purpose for each mapping)

  • Sax envelope crest: shorter reverb time on bass
  • Sax rms amp: higher cutoff frequency for reverb highpass filter

Purpose: louder sax will make the bass reverb thinner

  • Sax transient density: lower cutoff frequency for reverb lowpass filter
  • Sax envelope dynamics (dynamic range): higher cutoff frequency for reverb lowpass filter

Purpose: faster sax playing will make the reverb less prominent, but more dynamic playing will enhance it

  • Sax spectral flux: Granular processing state morph (Hadron X-axis) on bass
  • Sax envelope dynamics: Granular processing state morph (Hadron Y-axis) on bass
  • Sax rms amplitude: Granular processing state morph (Hadron Y-axis) on bass

Purpose: animation and variation

  • Bass spectral flatness: higher cutoff frequency of the delay feedback path on sax
    Purpose: more noisy bass playing will enhance delayed repetitions
  • Bass envelope dynamics: less delay feedback on sax
    Purpose: more dynamic playing will give less repetitions in delay on sax
  • Bass pitch: upward spectral shift on sax

Purpose: animation and variation, pulling in same direction (up pitch equals shift up)

  • Bass transient density: Granular process expression 1 (Hadron) on sax
  • Bass rms amplitude: Granular process expression 2 & 3 (Hadron) on sax
  • Bass rhythmic irregularity: Granular process expression 4 (Hadron) on sax
  • Bass MFCC diff: Granular processing state morph (Hadron X-axis) on sax
  • Bass envelope crest: Granular processing state morph (Hadron Y-axis) on sax

Purpose: multidimensional and rich animation and variation


On the second crossadaptive take between Steven and Kyle, I asked: "Does this hinder interaction, or does it make something interesting happen?"
Kyle says it hinders the way they would normally play together. "We can't go to our normal thing because there's a third party, the mediation in between us. It is another thing to consider." Also, the balance between the acoustic sound and the processing is difficult. This is even more difficult when playing with headphones, as the dynamic range and response are different. Sometimes the processing will seem very quiet in relation to the acoustic sound of the instruments, and at other times it will be too loud.
Steven says at one point he started not paying attention to the processing and focused mostly on what Kyle was doing. “Just letting the processing be the reaction to that, not treating it as an equal third party. … Totally paying attention to what the other musician is doing and just keeping up with him, not listening to myself.” This also mirrors the usual options of improvisational listening strategy and focus, of listening to the whole or focusing on specific elements in the resulting sound image.

Longer reflective conversation between Steven Leffue, Kyle Motl and Øyvind Brandtsegg. Done after the crossadaptive feature-modulator takes, touching on some of the problems encountered, but also reflecting on the wider context of different kinds of music accompaniment systems.

Liveconvolver tracks:

Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.

 

Liveconvolver take 1 Steven / Kyle

 

Liveconvolver take 2 Steven / Kyle

 

Discussion 1 on playing with the live convolver, with Steven Leffue and Kyle Motl.

Discussion 2 on playing with the live convolver, with Steven Leffue and Kyle Motl.

Session with Jordan Morton and Miller Puckette, April 2017
http://crossadaptive.hf.ntnu.no/index.php/2017/06/09/session-with-jordan-morton-and-miller-puckette-april-2017/
Fri, 09 Jun 2017

This session was conducted as part of the preparations for the larger session in UCSD Studio A, and we worked on calibrating the analysis methods to Jordan's double bass and vocals. Some of this calibration and accommodation of signals also includes the fun creative work of figuring out which effects and which effect parameters to map the analyses to. The session resulted in some new discoveries in this respect, for example using the spectral flux of the bass to control vocal reverb size, and using transient density to control very low range delay time modulations. Among the issues we discussed were aspects of timbral polyphony, i.e. how many simultaneous modulations can we perceive and follow?

 

Liveconvolver experiences, San Diego
http://crossadaptive.hf.ntnu.no/index.php/2017/06/07/liveconvolver-experiences-san-diego/
Wed, 07 Jun 2017

The liveconvolver has been used in several concerts and sessions in San Diego this spring. I played three concerts with the group Phantom Station (The Loft, Jan 30th, Feb 27th and Mar 27th), where the first involved the liveconvolver. Then came one concert with the band Creatures (The Loft, April 11th), where the live convolver was used with contact mikes on the drums, live sampling the IR from the vocals. I also played a duo concert with Kjell Nordeson at Bread and Salt April 13th, where the liveconvolver was used with contact mikes on the drums, live sampling the IR from my own vocals. Then came a duo concert with Kyle Motl at Coaxial in L.A. (April 21st), where a combination of crossadaptive modulation and live convolution was used. For the duo with Kyle, I switched between using bass as the IR and vocals as the IR, letting the other instrument play through the convolver. A number of studio sessions were also conducted, with Kjell Nordeson, Kyle Motl, Jordan Morton, Miller Puckette, Mark Dresser, and Steven Leffue. A separate report on the studio session in UCSD Studio A will be published later.

“Phantom Station”, The Loft, SD

This group is based on Butch Morris' conduction language for improvisation, and the performance typically requires a specific action (specific although it is free and open) to happen on cue from the conductor. I was invited into this ensemble and encouraged to use whatever part of my instrumentarium I might see fit. Since I had just finished the liveconvolver plugin, I wanted to try that out. I also figured my live processing techniques would fit easily, in case the liveconvolver did not work so well. Both the live processing and the live convolution instruments were in practice less than optimal for this kind of ensemble playing. Even though the instrumental response can be fast (low latency), the way I normally use these instruments is not for making a musical statement quickly for one second and then suddenly stopping again. This leads me to reflect on a quality measure I haven't really thought of before. For lack of a better word, let's call it reactive inertia: the possibility to completely change direction on the basis of some external and unexpected signal. This is something other than the audio latency (of the audio processing) and also something other than the user interface latency (for example, the time it takes the performer to figure out which button to turn to achieve a desired effect). I think it has to do with the sound production process: for example, how some effects take time to build up before they are heard as a distinct musical statement, the inertia due to interaction between humans, and the signal chain of sound production before the effects (say, if you live sample or live process someone, you need to get a sample or some exciter signal first). For live interaction instruments, the reactive inertia is then governed by the time it takes two performers to react to the external stimuli, and their combined efforts to be turned into sound by the technology involved. Much like what an old man once told me at Ocean Beach:

“There’s two things that needs to be ready for you to catch a wave
– You, …and the wave”.

We can of course prepare for sudden shifts in the music, and construct instruments that will be able to produce sudden shifts and fast reactions. Still, the reaction to a completely unexpected or unfamiliar stimulus will be slower than optimal. An acoustic instrument has fewer of these limitations. For this reason, I switched to using the Marimba Lumina for the remaining two concerts with Phantom Station, to be able to shape immediate short musical statements with more ease.

Phantom Station

“Creatures”, The Loft, SD

Creatures is the duo of Jordan Morton (double bass, vocals) and Kai Basanta (drums). I had the pleasure of sitting in with them, making it a trio for this concert at The Loft. Creatures have some composed material in the form of songs, and combine this with long improvised stretches. For this concert I got to explore the liveconvolver quite a bit, in addition to the regular live processing and Marimba Lumina. The convolver was used with input from piezo pickups on the drums, convolving with an IR live recorded from the vocals. Piezo pickups can be very "impulse-like", especially when used on percussive instruments. The pickups' response has a generous amount of high frequencies and a very high dynamic range. Due to the peaky, impulse-like nature of the signal, it drives the convolution almost like a sample playback trigger, creating delay patterns on the input sound. Still, the convolver output can become sustained and dense when there is high activity on the triggering input. In the live mix, the result sounds somewhat similar to infinite reverb or "freeze" effects (using a trigger to capture a timbral snippet and holding that sound as long as the trigger is enabled). Here, the capture would be the IR recording, and the trigger to create and sustain the output is the activity on the piezo pickup. The causality and performer interface are very different from those of a freeze effect, but listening from the outside, the result is similar. These expressive limitations can be circumvented by changing the miking technique and working in a more directed way with what sounds go into the convolver. Due to the relatively few control parameters, the main thing deciding how the convolver sounds is the input signals. The term causality in this context was used by Miller Puckette when talking about the relationship between performative actions and instrumental reactions.
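The delay-pattern behaviour follows from the mathematics of convolution: a spiky piezo signal approximates a train of scaled impulses, and convolving an impulse with the IR reproduces a copy of the IR at the impulse's time offset. A small numpy illustration with synthetic stand-in signals:

```python
import numpy as np

sr = 44100
t = np.linspace(0, 8, sr // 2)
ir = np.random.randn(sr // 2) * np.exp(-t)    # stand-in decaying "vocal" IR

piezo = np.zeros(sr)                           # impulse-like drum pickup:
piezo[[2000, 15000, 30000]] = [1.0, 0.6, 0.8]  # three sharp hits

out = np.convolve(piezo, ir)
# 'out' now contains three scaled copies of the IR, starting at the hit
# positions: the peaky input acts like a sample-playback trigger, and
# dense activity on the pickup sustains the output like a freeze effect.
```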

Creatures + Brandtsegg
CreaturesTheLoft_mix1_mstr

Creatures at The Loft. A liveconvolver example can be found at 29:00 to 34:00 with Vocal IR, and briefly around 35:30 with IR from double bass.

“Nordeson/Brandtsegg duo”, Bread & Salt, SD

Duo configuration, where Kjell plays drums/percussion and vibraphone, and I did live convolution, live processing and Marimba Lumina. My techniques were much like what I used with Creatures. The live convolver setup was also similar, with the IR being live sampled from my vocals and the convolver being triggered by piezo pickups on Kjell's drums. I had the opportunity to work over a longer period of time preparing for this concert together with Kjell. Because of this, we managed to develop a somewhat more nuanced utilization of the convolver techniques. Still, in the live performance situation on a PA, the technical circumstances made it a bit more difficult to utilize the fine-grained control over the process, and I felt the sounding result was similar in function to what I did together with Creatures. It works well like this, but there is potential for getting a lot more variation out of this technique.

Nordeson and Brandtsegg setup at Bread and Salt

We used a quadraphonic PA setup for this concert. Due to an error with the front-of-house patching, only 2 of the 4 lines from my electronics were recorded, so the mix is somewhat off balance. The recording also lacks the first part of the concert, starting some 25 minutes in.

NordesonBrandtsegg_mix1_mstr

“The Gringo and the Desert”, Coaxial, LA

In this duo Kyle Motl plays double bass and I do vocals, live convolution, live processing, and also crossadaptive processing. I did not use the Marimba Lumina in this setting, allowing more focus on the processing. In terms of crossadaptive processing, the utilization of the techniques is a bit more developed in this configuration. We have had the opportunity to work over several months, with dedicated rehearsal sessions focusing on separate aspects of the techniques we wanted to explore. As it happened during the concert, we played one long set and the different techniques were enabled as needed. Parameters that were manually controlled in parts of the set were delegated to crossadaptive modulations in other parts. The live convolver was used freely as one of several active live processing modules/voices. The liveconvolver with vocal IR can be heard, for example, from 16:25 to 20:10. Here, the IR is recorded from vocals, and the process acts as a vocal "shade" or "pad", creating long sustained sheets of vocal sound triggered by the double bass. Then comes the liveconvolver with bass IR from 20:10 to 23:15, after which we switch to full crossadaptive modulation until the end of the set. We used a complex mapping designed to respond to a variety of expressive changes. Our attitude/approach as performers was not to intellectually focus on controlling specific dimensions, but to allow the adaptive processing to naturally follow whatever happened in the music.

Gringo and the Desert soundcheck at Coaxial, L.A

coaxial_Kyle_Oeyvind_mix2_mstr

Gringo and the Desert at Coaxial DTLA, …yes, the background noise is the crickets outside.

Session with Steven Leffue (Apr 28th, May 5th)

I did two rehearsal sessions together with Steven Leffue in April, as preparation for the UCSD Studio A session in May. We worked both on crossadaptive modulations and on live convolution. Especially interesting with Steven is his own use of adaptive and crossadaptive techniques. He has developed a setup in PD, where he tracks transient density and amplitude envelope over different time windows, and also uses the standard deviation of transient density within these windows. The windowing and statistics he uses can act somewhat like a feature we have also discussed in our crossadaptive project: a method of making an analysis "in relation to the normal level" for a given feature, and thus a way to track relative change. Steven's Master's thesis "Musical Considerations in Interactive Design for Performance" relates to this and other issues of adaptive live performance. Notable is also his ICMC paper "AIIS: An Intelligent Improvisational System". His music can be heard at http://www.stevenleffue.com/music.html, where the adaptive electronics is featured in "A theory of harmony" and "Futures".
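A simple way to approximate that kind of "relative to the normal level" analysis is to compare a short-window statistic against a longer-running baseline. A rough sketch of the idea (window lengths are arbitrary, and this is not Steven's actual PD patch):

```python
from collections import deque
import statistics

class RelativeChangeTracker:
    """Track a feature (e.g. transient density) over a short and a
    long window; report the short-term mean relative to the long-term
    'normal level', plus the short-term standard deviation."""

    def __init__(self, short_len=20, long_len=400):
        self.short = deque(maxlen=short_len)
        self.long = deque(maxlen=long_len)

    def update(self, value):
        self.short.append(value)
        self.long.append(value)
        baseline = statistics.mean(self.long)
        recent = statistics.mean(self.short)
        spread = statistics.pstdev(self.short) if len(self.short) > 1 else 0.0
        relative = recent / baseline if baseline > 0 else 0.0
        return relative, spread   # relative > 1.0: more active than "normal"
```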
Our first session was mainly devoted to testing and calibrating the analysis methods for use on the saxophone. In very broad terms, we notice that the different analysis streams now seem to work relatively similarly on different instruments. The main differences are related to the extraction of tone/noise balance, general brightness, timbral "pressedness" (weight of formants), and to some extent to transient detection and pitch tracking. The reason why the analysis methods now appear more robust is partly due to refinements in their implementation, and partly due to (more) experience in using them as modulators. Listening, experimentation, tweaking, and plainly just a lot of spending-time-with-them have made for a more intuitive understanding of how each analysis dimension relates to an audio signal.
The second session was spent exploring live convolution between sax and vocals. Of particular interest here are the comments from Steven regarding the performative roles of recording the IR vs playing through the convolver. Steven states quite strongly that the one recording the IR has the most influence over the resulting music. This impression is consistent both when he records the IR (and I sing through it) and when I record the IR and he plays through it. This may be caused by several things, but of special interest is that it is diametrically opposed to what many other performers have stated. Kyle, Jordan and Kjell all voiced, in our initial sessions, a higher performative intimacy, a closer connection to the output, when playing through the IR. Maybe Steven is more concerned with the resulting timbre (including the processed sound) than with the physical control mechanism, as he routinely designs and performs with his own interactive electronics rig. Of course all musicians care about the sound, but perhaps there is a difference of approach on just how to get there. With the liveconvolver we put the performers in an unfamiliar situation, and the differences in approach might just show different methods of problem solving to gain control over this situation. What I'm trying to investigate is how the liveconvolver technique works performatively, and in this, the performer's personal and musical traits play into the situation quite strongly. Again, we can only observe single occurrences and try to extract things that might work well. There are no conclusions to be drawn on a general basis as to what works and what does not, and neither can we conclude what the nature of this situation and this tool is. One way of looking at it (I'm still just guessing) is that Steven treats the convolver as *the environment* in which music can be made. The changes to the environment determine what can be played and how it will sound; thus, the one recording the IR controls the environment and subsequently the most important factor in determining the music.
In this session, we also experimented a bit with transposed and reversed IRs, these being some of the parametric modifications we can make to the IR with our liveconvolver technique. Transposing can be interesting, but also quite difficult to use musically. Transposing in octave intervals can work nicely, as it will act just as much as a timbral colouring without changing pitch class. A fun fact about the reversed IR as used by Steven: If he played in the style of Charlie Parker and we reversed the IR, it would sound like Evan Parker. Then, if he played like Evan Parker and we reversed the IR, it would still sound like Evan Parker. One could say this puts Evan Parker at the top of the convolution-evolutionary-saxophone tree….
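Both modifications are simple buffer operations applied to the recorded IR before it is used for filtering. A sketch, assuming transposition by resampling (so an octave down also doubles the IR duration):

```python
from scipy.signal import resample

def reverse_ir(ir):
    """Time-reverse the impulse response."""
    return ir[::-1].copy()

def transpose_ir(ir, semitones):
    """Shift the spectral content of the IR by resampling the buffer.
    Positive semitones shorten the buffer (pitch up); -12 doubles the
    length and shifts everything down an octave."""
    ratio = 2.0 ** (-semitones / 12.0)
    return resample(ir, int(len(ir) * ratio))
```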

Steven Leffue

2017_05_StevenOyvLiveconv_VocIR_mix3_mstr

Liveconvolver experiment Sax/Vocals, IR recorded by vocals.

2017_05_StevenOyvLiveconv_SaxIR_mix3_mstr

Liveconvolver experiment Sax/Vocals, IR recorded by Sax.

2017_05_StevenOyvLiveconv_reverseSaxIR_mix3_mstr

Liveconvolver experiment Sax/Vocals, time reversed IR recorded by Sax.

 

Session with Miller Puckette, May 8th

The session was intended as a "calibration run", to see how the analysis methods responded to Miller's guitar, in preparation for the upcoming bigger session in UCSD Studio A. The main objective was to determine which analysis features would work best as expressive dimensions, find the appropriate ranges, and start looking at potentially useful mappings. After this, we went on to explore the liveconvolver with vocals and guitar as the input signals. Due to the "calibration run" mode of approach, the session was not videotaped. Our comments and discussion were only dimly picked up by the microphones used for processing. Here's a transcription of some of Miller's initial comments on playing with the convolver:

“It is interesting, that …you can control aspects of it but never really control the thing. The person who’s doing the recording is a little bit less on the hook. Because there’s always more of a delay between when you make something and when you hear it coming out [when recording the IR]. The person who is triggering the result is really much more exposed, because that person is in control of the timing. Even though the other person is of course in control of the sonic material and the interior rhythms that happen.”

Since the liveconvolver has been developed and investigated as part of the research on crossadaptive techniques, I had slipped into the habit of calling it a crossadaptive technique. In discussion with Miller, he pointed out that the liveconvolver is not really *crossadaptive* as such. BUT it involves some of the same performative challenges, namely playing something that is not played solely for the purpose of its own musical value. The performers will sometimes need to play something that will affect the sound of the other musician in some way. One of the challenges is how to incorporate that thing into the musical narrative, taking care of how it sounds in itself and exactly how it will affect the other performer's sound. Playing with the liveconvolver has this performative challenge, as has regular crossadaptive modulation. One thing the live convolver does not have is the reciprocal/two-way modulation; it is more of a one-way process. The recent Oslo session on live convolution used two liveconvolvers simultaneously to re-introduce the two-way reciprocal dependency.

Miller Puckette

2017_05_liveconv_OyvMiller1_M_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by guitar.

2017_05_liveconv_OyvMiller2_M_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by guitar.

2017_05_liveconv_OyvMiller3_M_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by guitar.

2017_05_liveconv_OyvMiller4_O_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by vocals.

2017_05_liveconv_OyvMiller5_O_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by vocals.

Live convolution session in Oslo, March 2017
http://crossadaptive.hf.ntnu.no/index.php/2017/06/07/live-convolution-session-in-oslo-march-2017/
Wed, 07 Jun 2017

Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice), Mats Claesson (documentation and observation).

The focus for this session was to work with the new live convolver in Ableton Live.

Setup – getting to know the Convolver

We worked in duo configurations – flute/guitar, guitar/vocal and vocal/flute.

We started by spending some time exploring and understanding the controls. Our first setup was guitar/flute, and we chose to start convolving in auto mode. We knew, both from experience with convolution in general and from previous live convolver session reports, that sustained and percussive sounds would yield very different results, and we therefore started with the combination of percussive sounds (flute) and sustained sounds (guitar). While this made it quite clear how the convolver worked, the output was less than impressive. The next step was to switch the inputs while preserving the playing technique. Still, everything seemed to sound somewhat delayed, with ringing overtones. It was suggested to add in some dry signal to produce more aesthetically pleasing sounds, but at this point we decided to listen only to the wet signal, as the main goal was to explore and understand the ways of the convolver.

Sonics of convolution

We continued the process by switching from auto to triggered mode, where the flute had the role of the IR, to make the convolver a bit more responsive. This produced a few nice moments, but the overall result was still quite "mushy". We explored reducing the IR size and working with IR pitch as ways of getting other sound qualities.

We subsequently switched from flute to vocal, working with guitar and voice in trigger mode, where the voice was used as the IR. We also decided to add in some dry sound from the vocal, since the listeners (Mats and Bjørnar) found it much more interesting when we could hear the relation between the IR and the convolved signal. Since the convolver still felt quite slow in response and muddy in sound, we also tried out very short IR sizes in auto mode.

As shown in this video, we tried to find out how the vocal sound could affect the processing of the flute. The experience was that we either got some strange reverberation or some strange EQ – we thought that the sound that came out of the processed flute was not as interesting as we had hoped!

Trying again – it doesn't feel like we get any interesting sounds. Is it the vocal input that doesn't give the flute sound anything to work with? The flute sound we get out is still a bit harsh in the higher frequencies, with a bit of strange reverberation.

Critical listening

All these initial tests were more or less didactic, in that we chose fixed materials (such as percussive versus sustained) in order to emphasise the effect of the convolver. After three short sessions this became a limitation that hindered improvisational flow and phrasing. Especially in the flute/vocal session this was an issue. Too often, the sonic result of the convolution was less than intriguing. We discussed whether the live convolver would lend itself more easily to composed situations, as the necessity of carefully selecting material types that would convolve in an interesting way rendered it less usable in improvised settings. We decided to add more control to the setup in order to remedy this problem.

Adding control

Gyrid commented that she felt the system was quite limiting, especially when you are used to controlling live processing yourself. To remedy this, we started adding parameter controls for the performers. In a session with vocal and guitar, we added an expression pedal for Bernt Isak (guitar) to control the IR size. This was the first time we had the experience of a responsive system that made musical sense.

Revised setup

After some frustrating and, from our perception, failed attempts at getting interesting musical results, we decided to revise our setup. After some discussion we came to the conclusion that our existing setup, using one live convolver, was cross adaptive in the signal control domain, but didn't feel cross adaptive in the musical domain. We therefore decided to add another crossing by using two live convolvers, where each instrument had the role of IR in one convolver and of input in the other. We also decided to set one convolver to auto mode and the other to trigger mode, for better separation and more variation in the musical output.
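The revised routing can be summarised as two convolvers with the instrument roles crossed. A schematic sketch (the IR update logic – one convolver in auto mode, one in trigger mode – is left out):

```python
import numpy as np

def dual_convolver_block(block_a, block_b, ir_from_a, ir_from_b):
    """Cross-wired live convolution: instrument A supplies the IR for
    the convolver processing instrument B, and vice versa, so each
    player simultaneously filters and is filtered by the other."""
    a_through_b = np.convolve(block_a, ir_from_b)[:len(block_a)]
    b_through_a = np.convolve(block_b, ir_from_a)[:len(block_b)]
    # In the session the two convolver outputs sat on separate return
    # tracks, panned slightly apart to keep them distinguishable.
    return a_through_b, b_through_a
```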

Guitar

  • Fader 1: IR size
  • Fader 2: IR auto update-rate
  • Fader 3: Dry sound
  • Expression pedal 1: IR pitch
  • Toggle switch: switch inputs

Vocal

  • Fader 1: IR size
  • Fader 2: IR pitch

Flute

  • Fader 1: IR size
  • Fader 2: IR pitch
  • Fader 3: Dry sound

Convolvers were placed on return tracks – both panned slightly to the sides to make it easier to distinguish the two convolvers, while also adding some stereo width.

Sound excerpt 1 – flute & vocal, using two convolvers:

Bjørnar has the same setup as Bernt Isak. A better experience. It could change the way the convolver uses the signal if we use different microphones – maybe one inside and one outside the flute.

It's quite apparent to us that using sustained sounds doesn't work very well. It seems to us that the effect just makes the flute sound less interesting; it somehow reduces the bandwidth, amplifies the resonant frequencies, or just makes some strange phasing. The soundscape changes and gets more interesting when we shift to more percussive and distorted sound qualities. Could it be an idea to make it possible to extract only the distorted parts of a sound input?

Sound excerpt 2 – guitar & vocal, using two convolvers:

Session with guitar and vocal where we control the IR size, IR pitch and IR rate.

Gyrid has the input signal in trigger mode and can control IR size and pitch with faders. Bernt Isak has the input signal in auto mode and can control the amount of dry signal, IR size and rate with faders, and pitch with an expression pedal. Using two convolvers was a very positive experience! Even though a single convolver is cross adaptive in the sense that it uses two signals, it didn't feel cross adaptive musically, but more like a traditional live processing setup. We also found that having one convolver in trigger mode and one in auto mode was a good way of adding movement and variation to the music, as one convolver would keep a more steady "timing" while the other could be completely free. It also seems essential to have the possibility to control the dry signal – hearing the dry signal makes the music more three-dimensional.

Sound excerpt 3 – flute & guitar, using two convolvers:

Session with guitar and flute – Bjørnar has the same setup as Gyrid, but with added control for the amount of dry sound. Same issue with the flute microphone as above.

The experience is very different between flute and vocals and guitar and vocals; this has mainly to do with the way the instruments are played. The guitar has a very distinct attack, and it is very clear when the timbral character changes. Flute and vocals have a more similar frequency response, and the result gets less interesting. Adding more effects to the guitar (distortion + tremolo) makes a huge difference – as does the fact that percussive sounds from the vocal give the most interesting musical output.

 

Overall session reflections 

The choice of instrument combinations is crucial for live convolving to be controllable and to produce artistically interesting results. We also noted that there is a difference between signal cross adaptiveness and musical cross adaptiveness. In our experience, double live convolving is needed to produce a feel of musical cross adaptiveness similar to what we have experienced in the previous cross adaptive signal processing sessions.

Ideas for further development

Some ideas for adding more control: the possibility to switch between auto mode and trigger mode could be interesting. It would also be useful to have a visual indicator for the IR trigger mode, for easier tuning of the trigger settings.

Bjørnar suggested combining the live convolver with the analyzer/midimapper in order to convolve only when there is a minimum of noise and/or transients available, e.g. linking spectral crest to a wet/dry parameter in the convolution, or splitting the audio signal based on spectral conditions.
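A rough sketch of what such a linking could look like, assuming spectral crest is taken as the peak-to-mean ratio of the magnitude spectrum; the thresholds below are guesses and would need tuning per instrument:

```python
import numpy as np

def spectral_crest(frame):
    """Peak-to-mean ratio of the magnitude spectrum (high = tonal, low = noisy)."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.max(mag) / (np.mean(mag) + 1e-12)

def wet_amount(frame, crest_lo=5.0, crest_hi=50.0):
    """More convolution (wet) for noisy input, less for tonal input."""
    c = np.clip(spectral_crest(frame), crest_lo, crest_hi)
    tonalness = (c - crest_lo) / (crest_hi - crest_lo)  # 0 = noisy, 1 = tonal
    return 1.0 - tonalness  # send level to the convolver

sr = 44100
noise = 0.3 * np.random.randn(2048)
tone = np.sin(2 * np.pi * 440 * np.arange(2048) / sr)
print(wet_amount(noise), wet_amount(tone))  # noise -> high wet, tone -> low wet
```

A noisy or transient-rich frame has a flat spectrum and hence a low crest, so it gets sent to the convolver; a clean sustained tone passes through mostly dry.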

It could perhaps also yield some interesting results to add spectral processing that reduces the fundamental frequency component (similar to Øyvind Brandtsegg's feedback tools) for instruments that have a very strong fundamental (like the flute).
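As a crude frame-based illustration of the idea (our own sketch, not Brandtsegg's actual implementation): locate the strongest low-frequency partial and attenuate the bins around it before resynthesis. A real-time version would additionally need windowing and overlap-add:

```python
import numpy as np

def suppress_fundamental(frame, sr, fmax=1000.0, width_bins=2, atten=0.1):
    """Duck the strongest partial below fmax and return the filtered frame."""
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    low = np.where(freqs <= fmax)[0]
    f0_bin = low[np.argmax(np.abs(spec[low]))]  # assumed fundamental
    lo = max(f0_bin - width_bins, 0)
    spec[lo:f0_bin + width_bins + 1] *= atten   # attenuate around it
    return np.fft.irfft(spec, n=len(frame))

sr = 44100
t = np.arange(2048) / sr
flute_like = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
filtered = suppress_fundamental(flute_like, sr)  # 440 Hz ducked, 880 Hz kept
```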

]]>
http://crossadaptive.hf.ntnu.no/index.php/2017/06/07/live-convolution-session-in-oslo-march-2017/feed/ 1 902
Second session at Norwegian Academy of Music (Oslo) – January 13. and 19., 2017 http://crossadaptive.hf.ntnu.no/index.php/2017/06/07/second-session-at-norwegian-academy-of-music-oslo-january-13-and-19-2017/ http://crossadaptive.hf.ntnu.no/index.php/2017/06/07/second-session-at-norwegian-academy-of-music-oslo-january-13-and-19-2017/#comments Wed, 07 Jun 2017 12:47:40 +0000 http://crossadaptive.hf.ntnu.no/?p=759 Continue reading "Second session at Norwegian Academy of Music (Oslo) – January 13. and 19., 2017"]]> Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice)

The focus of this session was to play with, fine-tune and work further on the mappings we set up during the last session at NMH in November. For practical reasons, we had to split the session into two half days, on the 13th and 19th of January.

13th of January 2017

We started by analysing 4 different musical gestures for the guitar, something that had been skipped due to time constraints during the last session. During this analysis we found the need to specify the spread of the analysis results in addition to the region. This way we could differentiate the analysis results in terms of stability and conclusiveness. We decided to analyse the flute and vocal again to add the new parameters.
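A small sketch of what "region plus spread" can mean in practice, assuming the analyser emits a stream of values per gesture; the percentile choices and numbers below are invented:

```python
import numpy as np

def summarise(values):
    """Summarise an analysis stream by its typical region and its spread."""
    v = np.asarray(values, dtype=float)
    region = (np.percentile(v, 10), np.percentile(v, 90))  # typical range
    spread = float(np.std(v))                              # stability measure
    return region, spread

# e.g. transient density readings over one guitar gesture (made-up numbers)
readings = [2.1, 2.4, 2.2, 2.6, 2.3, 5.9, 2.2]
print(summarise(readings))  # a large spread flags the gesture as inconclusive
```

A gesture whose readings cluster tightly can be mapped with confidence, while a wide spread suggests that parameter is unreliable for that gesture.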

19th of January 2017

After the analysis was done, we started working on a mapping scheme that involved all 3 instruments, so that we could play in a trio setup. The mappings between flute and vocal were the same as in the November session.

The analyser was still run in Reaper, but all routing, effect chains and mapping (MIDIator) were now done in Live. Software instability (the old Reaper projects from November wouldn't open) and the change of DAW from Reaper to Live meant that we had to set up and tune everything from scratch.

Sound examples with comments and immediate reflections

1. Guitar & Vocal – First duo test, not ideal; we forgot to mute the analyser.

2. Guitar & Vocal retake – Listened back on speakers after recording. Nice sounding. Promising.

Reflection: There seem to be some elements missing, in a good way, meaning that there is space left for things to happen in the trio format. There is still a need for fine-tuning of the relationship between guitar and vocal. This stems from the mapping being done mainly with the trio format in mind.

3. Vocals & flute – Listened back on speakers after recording.

Reflections: a dynamic soundscape and quite diverse results; some of the same situations as with take 2 – the sounds feel complementary to something else. Effect tuning: the ring mod is more subtle (good!) compared to last session, but the filter on the vocals is a bit too heavy-handed. Should we flip the vocal filter? This could prevent filtering and reverb taking place simultaneously. Concern: is the guitar/vocal relationship weaker than vocal/flute? Another idea comes up – should we look at connecting gates or bypasses in order to create dynamic transitions between dry and processed signals?

4. Flute & Guitar

Reflections: both the flute ring mod and the guitar delay are a bit on the heavy side, not responsive enough. It is interesting how the effect transformations affect material choices when improvising.

5. Trio

Comments and reflections after the recording session

It is interesting to be in a situation where you, as you play, have a multi-layered focus: playing, listening, thinking of how you affect the processing of your fellow musicians and how your own sound is affected, all while trying to make something worth listening to. Of course we are still in an "etude mode", but striving for the goal: great output!

There seems to be a bug in the analyser tool when it comes to consistency: sometimes some parameters drop out. We found that it is a good idea to run the analysis a couple of times for each sound to get the most precise result.

]]>
http://crossadaptive.hf.ntnu.no/index.php/2017/06/07/second-session-at-norwegian-academy-of-music-oslo-january-13-and-19-2017/feed/ 2 759
Cross adaptive session with 1st year jazz students, NTNU, March 7-8 http://crossadaptive.hf.ntnu.no/index.php/2017/04/06/cross-adaptive-session-with-1st-year-jazz-students-ntnu-march-7-8/ http://crossadaptive.hf.ntnu.no/index.php/2017/04/06/cross-adaptive-session-with-1st-year-jazz-students-ntnu-march-7-8/#comments Thu, 06 Apr 2017 13:46:21 +0000 http://crossadaptive.hf.ntnu.no/?p=784 Continue reading "Cross adaptive session with 1st year jazz students, NTNU, March 7-8"]]> This is a description of a session with first year jazz students at NTNU recorded March 7 and 8. The session was organized as part of the ensemble teaching that is given to jazz students at NTNU, and was meant to take care of both the learning outcomes from the normal ensemble teaching, and also aspects related to the cross adaptive project.

Musicians:

Håvard Aufles, Thea Ellingsen Grant, Erlend Vangen Kongstorp, Rino Sivathas, Øyvind Frøberg Mathisen, Jonas Enroth, Phillip Edwards Granly, Malin Dahl Ødegård and Mona Thu Ho Krogstad.

Processing musician:

Trond Engum

Video documentation:

Andreas Bergsland

Sound technician:
Thomas Henriksen

 

Video digest from the session:

Preparation:

Based on our earlier experiences with bleed between microphones, we placed the instruments in separate rooms. Since this was quite a big group of different performers, it was important that changing the set-up took as little time as possible. A system set-up was also prepared beforehand, based on the instruments in use. To give the performers an understanding of the project as early in the process as possible, we used the same four-step chronology when introducing them to the set-up.

  1. Starting with individual instruments, trying different effects through live processing, and deciding together with the performers which effects were most suitable to add to their instrument.
  2. Introducing the analyser and deciding, based on input from the performers, which methods were best suited for controlling different effects from their instrument.
  3. Introducing adaptive processing, where one performer controls the effects on the other, and then repeating vice versa.
  4. Introducing cross-adaptive processing, where all previous choices and mappings are opened up for both performers.

 

Session report:

Day 1. Tuesday 7th March

Trumpet and drums

Sound example 1: (Step 1) Trumpet live processed with two different effects, convolution (impulse response from water) and overdrive.

 

The performer was satisfied with the chosen effects, not least because the two were quite different in sound quality. The overdrive was perceived as nice, but he would not like to have it present all the time. We decided to save these effects for later use on trumpet, and to be mindful of dynamic control of the overdrive.

 

Sound example 2: (Step 1) Drums live processed with dynamically changing delay and a pitch shift 2 octaves down. The performer found the chosen effects interesting, and the mapping was saved for later use.

 

Sound example 3: (Step 1) Before entering the analyser and adaptive processing, we wanted to try playing together with the chosen effects to see if they blended well. The trumpet player had some problems hearing the drums during the performance; they felt a bit in the background. We found that the direct sound of the drums was a bit low in the mix, and this was adjusted. We discussed that it is possible to make the direct sound of both instruments louder or softer, depending on what the performer wants to achieve.

 

Sound example 4: (Step 2/3) For this example we brought in the analyser, using transient density on drums. This was tried out by showing the analyser output while playing an accelerando on drums. It was then set up as an adaptive control from drums on the trumpet. For control, the trumpet player suggested that the higher the transient density, the less convolution effect should be added to the trumpet (less send to a convolution effect with a recording of water). The reason was that it could make more sense to have more water on slow, ambient parts than on the faster, hectic parts. At the same time, he suggested that the opposite should happen with the overdrive: the higher the transient density, the more overdrive on the trumpet. During the first take, a reverb was added to the overdrive in order to blend the sound better into the production. The dynamic control over the effects felt a bit difficult, because the water disappeared too easily and the overdrive was introduced too easily. We agreed to fine-tune the dynamic control before doing the actual take that is presented as sound example 4.
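This mapping boils down to two opposing effect sends driven by one analysis parameter. The sketch below is our own illustration with invented ranges; a curve exponent is one way to address the "too easily" problem, keeping the water send up until the density gets fairly high:

```python
import numpy as np

def density_to_sends(density, d_lo=0.5, d_hi=8.0, curve=2.0):
    """Transient density (events/sec) -> (water_send, overdrive_send), both 0-1."""
    x = np.clip((density - d_lo) / (d_hi - d_lo), 0.0, 1.0) ** curve
    return 1.0 - x, x  # slow playing -> more water, dense playing -> more overdrive

for d in (0.5, 3.0, 8.0):
    print(d, density_to_sends(d))  # the two sends move in opposite directions
```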

 

Sound example 5: For this example we changed roles and let the trumpet control the drums (adaptive processing). We followed a suggestion from the trumpet player and used pitch as the analysis parameter. We used this to control the delay effect on the drums: low notes produced long gaps between delays, whereas high notes produced small gaps. This was maybe not the best solution for getting good dynamic control, but we decided to keep it anyway.
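A sketch of this inverse pitch-to-delay mapping, with invented pitch and time ranges:

```python
import numpy as np

def pitch_to_delay_ms(pitch_hz, p_lo=200.0, p_hi=1200.0,
                      gap_long=1500.0, gap_short=60.0):
    """Low notes -> long gaps between delay taps, high notes -> short gaps."""
    x = np.clip((pitch_hz - p_lo) / (p_hi - p_lo), 0.0, 1.0)
    return gap_long + x * (gap_short - gap_long)

print(pitch_to_delay_ms(220.0), pitch_to_delay_ms(880.0))  # ~1471 ms vs ~521 ms
```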

 

Sound example 6: Cross adaptive performance using the effects and control mappings introduced in examples 4 and 5. This was a nice experience for the musicians. Even though it still felt a bit difficult to control, it was experienced as musically meaningful. Drummer: "Nice to play a steady groove, and listen to how the trumpet changed the sound of my instrument".

 

Vocals and piano

Sound example 7: We had now changed the instrumentation to vocals and piano, and we started with a performance doing live processing on both instruments. The vocals were processed with two different effects: a delay, and convolution with a recording of small metal parts. The piano was processed with an overdrive and convolution with water.

 

Sound example 8: Cross adaptive performance where the piano was analysed for rhythmic consonance, controlling the delay effect on the vocals, and the vocals were analysed for transient density, controlling the convolution effect on the piano. Both musicians found this difficult, but musically meaningful. Sometimes the control aspect was experienced as counterintuitive to the musical intention. Pianist: "It felt like there was a third musician present".

 

Saxophone self-adaptive processing

Sound example 9: We started with a performance doing live processing to familiarize the performer with the effects. The performer found the augmentation of extended techniques, such as clicks and pops, interesting, since this magnified "small" sounds.

 

Sound example 10: Self-adaptive processing performance where the saxophone was analysed for transient density, which was then used to control two different convolution effects (a recording of metal parts and a recording of a cymbal), the first resulting in a delay-like effect, the second in a reverb. The higher the transient density, the more delay and less reverb, and vice versa. The performer experienced the quality of the two effects as quite similar, so we removed the delay effect.
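This "more of one, less of the other" pattern can also be written as an equal-power crossfade, which keeps the combined effect level roughly constant while the balance follows the analysis. A hypothetical sketch of the mapping as originally set up:

```python
import numpy as np

def crossfade_sends(density, d_lo=0.5, d_hi=8.0):
    """Equal-power crossfade between two sends, driven by transient density."""
    x = np.clip((density - d_lo) / (d_hi - d_lo), 0.0, 1.0)
    delay_send = np.sin(0.5 * np.pi * x)   # rises with density
    reverb_send = np.cos(0.5 * np.pi * x)  # falls with density
    return delay_send, reverb_send

print(crossfade_sends(4.0))  # mid density -> both sends around 0.7
```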

 

Sound example 11: Self-adaptive processing performance using the same set-up, but changing the delay effect to overdrive. The use of overdrive on saxophone did not bring anything new to the table the way it was set up, since the acoustic sound of the instrument can sound similar to the effect when played with strong energy.

 

Day 2. Wednesday 8th March

 

Saxophone and piano

Sound example 12: Performance with saxophone and live processing, familiarizing the performer with the different effects and then choosing which of them to bring further into the session. The performer found this interesting and wanted to continue with reverb ideas.

 

Sound example 13: Performance with piano and live processing. The performer especially liked the last part with the delays. Saxophonist: "It was sometimes like listening to the sound under water (convolution with water), and sometimes like listening to an old radio (overdrive)". The pianist wanted to keep the effects that were introduced.

 

Sound example 14: Adaptive processing, controlling the delay on the saxophone from the piano, using analysis of transient density: the higher the transient density, the larger the gap between delays on the saxophone. The saxophone player found it difficult to interact, since the piano had a clean sound during the performance. The pianist, on the other hand, felt in control of the effect that was added.

 

Sound example 15: Adaptive processing using the saxophone to control the piano. We analysed the rhythmic consonance of the saxophone: the higher the degree of consonance, the more convolution effect (water) was added to the piano, and vice versa. The saxophonist didn't feel in control during the performance, and guessed it was due to not holding a steady rhythm over a longer period. The direct sound of the piano was also a bit loud in the mix, making the added effect a bit low in the mix. The pianist felt that the saxophone was in control, but agreed that the analysis was not able to read over its full range because of the lack of a steady rhythm over a longer time period.

 

Sound example 16: Crossadaptive performance using the same set-up as in examples 14 and 15. Both performers felt in control, and started to explore more of the possibilities. An interesting point occurs when the saxophone stops playing, since the rhythmic consonance analysis will show a drop as soon as it starts to read again. This could result in strong musical statements.

 

Sound example 17: Crossadaptive performance keeping the same settings, but adding RMS analysis of the saxophone to control a delay on the piano (the higher the RMS, the less delay, and vice versa).

 

Vocals and electric guitar

Sound example 18: Performance with vocals and live processing. Vocalist: "It is fun, but something you need to get used to; it needs a lot of time".

 

Sound example 19: Performance with guitar and live processing. Guitarist: "I adapted to the effects; my direct sound probably sounds terrible, and I feel that I'm losing my touch, but it feels complementary and is a nice experience".

 

Sound example 20: Performance with adaptive processing, analysing the guitar using RMS and transient density: the higher the transient density, the more delay was added to the vocals, and the higher the RMS, the less reverb. Guitarist: "I feel like a remote controller, and it is sometimes hard to focus on what I play". Vocalist: "Feels like a two-dimensional way of playing".

 

Sound example 21: Performance with adaptive processing, controlling the guitar from the vocals. The rhythmic consonance of the vocals controls the time gap between delays inserted on the guitar: higher rhythmic consonance results in larger gaps, and vice versa. The transient density of the vocals controls the amount of pitch shift added to the guitar: the higher the transient density, the less volume is sent to the pitch shift.

 

Sound example 22: Performance with cross adaptive processing using the same settings as in sound example 20 and 21.

Vocalist: "It is another way of making music, I think". Guitarist: "I feel control and I feel my impact, but the musical intention really doesn't fit with what is happening – which is an interesting parameter. Changing so much by doing so little is cool".

 

Observation and reflections

The sessions have now come to a point where less time is spent on setting up and figuring out how the functionality in the software works, and more time on actual testing. This is an important step, considering that we are working with musicians who are introduced to the concept for the first time. Good software stability and separation between microphones make the workflow much more effective. It still took some time to set everything up the first day, due to two system crashes: the first related to the MIDIator, the second to video streaming.

 

Since the system was prepared beforehand, there was a lot of reuse, both of analysis methods and of effect choices. Even though there was a lot of reuse on the technical side, the performances and results show a large variety of expression. While this is not surprising, we think it is an important aspect to be reminded of during the project.

 

Another technical workaround discussed for the analysis stage was the possibility of operating with two different microphones on the same instrument. The idea is to use one for the analysis reading, and one for capturing the "total" sound of the instrument for use in processing. This will of course depend on which analysis parameter is in use, but will surely help achieve a more dynamic reading in some situations, both with regard to bleed and for a closer focus on the desired attributes.
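A minimal sketch of this split, assuming block-based processing; the names and the placeholder ducking effect are hypothetical. The close (analysis) microphone only produces the control signal, while the second microphone carries the audio that is actually processed:

```python
import numpy as np

def rms(block):
    return float(np.sqrt(np.mean(block ** 2)))

def process_block(analysis_block, sound_block, depth=0.8):
    """Derive control from the analysis mic, apply it to the 'total' sound mic."""
    control = min(rms(analysis_block) * 10.0, 1.0)  # crude envelope follower
    return sound_block * (1.0 - depth * control)    # placeholder: simple ducking

# stand-in blocks: close mic (dry, little bleed) and room mic (full sound)
close_mic = 0.05 * np.random.randn(1024)
room_mic = 0.2 * np.random.randn(1024)
out = process_block(close_mic, room_mic)
```

The control path and the audio path can then be conditioned independently, e.g. gating the analysis microphone much harder than one would ever gate the performed sound.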

 

The pedagogical approach using the four-step introduction worked well when introducing the concept to musicians for the first time. It helped their understanding during the process, and therefore resulted in more fruitful discussions and reflections between the performers during the session. Starting with live processing demonstrates the possibilities and the flexible control over different effects early in the process, and gives the performers the chance to take part in deciding the aesthetics and building a framework before entering the control aspect.

 

Quotes from the performers:

Guitarist: “Totally different experience”. “Felt best when I just let go, but that is the hardest part”. “It feels like I’m a midi controller”. “… Hard to focus on what I’m playing”. “Would like to try out more extreme mappings”

Vocalist: “The product is so different because small things can do dramatic changes”. “Musical intention crashes with control”. “It feels like a 2-dimensional way of playing”

Pianist: "Feels like an extra musician"

 

 

 

]]>
http://crossadaptive.hf.ntnu.no/index.php/2017/04/06/cross-adaptive-session-with-1st-year-jazz-students-ntnu-march-7-8/feed/ 1 784