Exploring radically new modes of musical interaction in live performance
Category: Sessions
Session with Kim Henry Ortveit
February 22, 2018
-
Kim Henry is currently a master's student in music technology at NTNU, and as part of his master's project he has designed a new hybrid instrument. The instrument allows close interactions between what is played on a keyboard (or rather a ...
Session with Michael Duch
February 22, 2018
-
February 12th, we did a session with Michael Duch on double bass, exploring auto-adaptive use of our techniques. We were interested in seeing how the crossadaptive techniques could be used for personal timbral expansion for a single player. This is ...
Session with David Moss in Berlin
February 2, 2018
-
Thursday February 1st, we had an enjoyable session at the Universität der Künste in Berlin. This was at the Grunewaldstraße campus and generously hosted by professor Alberto De Campo. This was a nice opportunity to follow up on earlier collaboration ...
Session with 4 singers, Trondheim, August 2017
October 9, 2017
-
Location: NTNU, Studio Olavshallen. Date: August 28 2017 Participants: Sissel Vera Pettersen, vocals Ingrid Lode, vocals Heidi Skjerve, vocals Tone Åse, vocals Øyvind Brandtsegg, processing Andreas Bergsland, observer and video documentation Thomas Henriksen, sound engineer Rune Hoemsnes, sound engineer We ...
Session in UCSD Studio A
September 8, 2017
-
This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all ...
Session with Jordan Morton and Miller Puckette, April 2017
June 9, 2017
-
This session was conducted as part of preparations for the larger session in UCSD Studio A, and we worked on calibration of the analysis methods to Jordan's double bass and vocals. Some of the calibration and accommodation of signals also includes ...
Liveconvolver experiences, San Diego
June 7, 2017
-
The liveconvolver has been used in several concerts and sessions in San Diego this spring. I played three concerts with the group Phantom Station (The Loft, Jan 30th, Feb 27th and Mar 27th), where the first involved the liveconvolver. Then ...
Live convolution session in Oslo, March 2017
June 7, 2017
-
Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice), Mats Claesson (documentation and observation). The focus for this session was to work with the new live convolver in Ableton Live. Setup - getting to know the Convolver. We ...
Second session at Norwegian Academy of Music (Oslo) – January 13. and 19., 2017
June 7, 2017
-
Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice) The focus for this session was to play with, fine tune and work further on the mappings we set up during the last session at NMH in November. Due ...
Cross adaptive session with 1st year jazz students, NTNU, March 7-8
April 6, 2017
-
This is a description of a session with first year jazz students at NTNU recorded March 7 and 8. The session was organized as part of the ensemble teaching that is given to jazz students at NTNU, and was meant to take care ...
Live convolution with Kjell Nordeson
March 23, 2017
-
Session at UCSD March 14. Kjell Nordeson: Drums. Øyvind Brandtsegg: Vocals, Convolver. Contact mikes: In this session, we explore the use of convolution with contact mikes on the drums to reduce feedback and cross-bleed. There is still some bleed from ...
Session with classical percussion students at NTNU, February 20, 2017
March 10, 2017
-
Introduction: This session was a first attempt in trying out cross-adaptive processing with pre-composed material. Two percussionists, Even Hembre and Arne Kristian Sundby, students at the classical section, were invited to perform a composition written for two tambourines. The musicians ...
Convolution experiments with Jordan Morton
March 1, 2017
-
Jordan Morton is a bassist and singer, she regularly performs using both instruments combined. This provides an opportunity to explore how the liveconvolver can work when both the IR and the live input are generated by the same musician. We did a ...
Docmarker tool
February 16, 2017
-
Docmarker: During our studio sessions and other practical research work sessions, we noted that we needed a tool to annotate documentation streams. The stream could be an audio file, a video or some line of timed events. Audio editors and ...
Session UCSD, 14 February 2017
February 15, 2017
-
Session objective: The session objective was to explore the live convolver, how it can affect our playing together and how it can be used. New convolver functionality for this session is the ability to trigger IR update via transient detection, ...
Crossadaptive session NTNU 12. December 2016
December 16, 2016
-
Participants: Trond Engum (processing musician) Tone Åse (vocals) Carl Haakon Waadeland (drums and percussion) Andreas Bergsland (video) Thomas Henriksen (sound technician) Video digest from session: https://www.youtube.com/watch?v=ktprXKVdqF4&feature=youtu.be Session objective and focus: The main focus in this session was to explore other ...
Oslo, First Session, October 18, 2016
December 12, 2016
-
First Oslo Session. Documentation of process 18.11.2016 Participants Gyrid Kaldestad, vocal Bernt Isak Wærstad, guitar Bjørnar Habbestad, flute Observer and Video Mats Claesson The Session took place in one of the sound studios at the Norwegian Academy of Music, Oslo ...
Multi-camera recording and broadcasting
November 21, 2016
-
Audio and video documentation is often an important component of projects that analyse or evaluate musical performance and/or interaction. This is also the case in the Cross Adaptive project where every session was to be recorded in video and multi-track ...
Session 19. October 2016
October 31, 2016
-
Location: Kjelleren, Olavskvartalet, NTNU, Trondheim Participants: Maja S. K. Ratkje, Siv Øyunn Kjenstad, Øyvind Brandtsegg, Trond Engum, Andreas Bergsland, Solveig Bøe, Sigurd Saue, Thomas Henriksen Session objective and focus Although the trio BRAG RUG has experimented with crossadaptive techniques in rehearsals and ...
Session 20. – 21. September
September 30, 2016
-
Location: Studio Olavskvartalet, NTNU, Trondheim Participants: Trond Engum, Andreas Bergsland, Tone Åse, Thomas Henriksen, Oddbjørn Sponås, Hogne Kleiberg, Martin Miguel Almagro, Simen Bjørkhaug, Ola Djupvik, Sondre Ferstad, Björn Petersson, Emilie Wilhelmine Smestad, David Anderson, Ragnhild Fangel Session objective and focus: This post is a description of a session with 3rd year Jazz students at NTNU. It was the first session following the intended procedure ...
Mixing with Gary
June 16, 2016
-
During our week in London we had some sessions with Gary Bromham, first at the Academy of Contemporary Music in Guildford on June 7th, then at QMUL later in the week. We wanted to experiment with cross-adaptive techniques ...
Mixing example, simplified interaction demo
May 24, 2016
-
When working further with some of the examples produced in an earlier session, I wanted to see if I could demonstrate more clearly one instrument's influence on the other instrument's sound. Here I've made an example where the ...
Introductory session NTNU, Trond/Øyvind
May 13, 2016
-
Date: 3 May 2016 Location: NTNU Mustek Participants: Trond Engum, Øyvind Brandtsegg Session objective and focus: Test ourselves as musicians in a cross adaptive setting. Meaning: test how we react to being in the role of the processed. Test out different mappings, ...
Introductory session, NTNU, Bernt/Øyvind
May 13, 2016
-
Date: 26 April 2016 Location: NTNU Mustek Participants: Bernt Isak Wærstad, Øyvind Brandtsegg Session objective and focus: Test ourselves as musicians in cross adaptive setting. We have usually been the processing musicians, now we should test ourselves as the victims ...
Westerdal session April 2016
May 13, 2016
-
Session at Westerdal ACT, Oslo Participants: Ylva Øyen Brandtsegg, Øyvind Brandtsegg Objective: Studio use of cross_shimmer effect Takes Take 1: Cross_shimmer: Vocals as spectral input, Drumset as exciter Take 2: As above, another take on the same musical goal Comments: * Feedback not ...
Tape to zero 2016
May 13, 2016
-
Concert: Tape to zero festival, April 21 2016, Victoria Jazz, Nasjonal jazzscene Oslo Participants: Maja S.K. Ratkje, Siv Øyunn Kjenstad, Øyvind Brandtsegg The objective, in the context of this research project, was live use of the "cross_shimmer" effect, testing musical applications ...
Jazz ensemble, spring 2016
May 13, 2016
-
Experimental session in the context of ensemble teaching at the jazz dept at NTNU, April 2016. The objective was to test some simple interaction modes, starting with cross adaptive amplitude control. How will the musicians react to this kind of interaction? ...
Kim Henry is currently a master's student in music technology at NTNU, and as part of his master's project he has designed a new hybrid instrument. The instrument allows close interactions between what is played on a keyboard (or rather a Seaboard) and some drum pads. Modulations and patterns played on one part of the instrument will determine how other components of the instrument actually sound. This is combined with some intricate layering, looping, and quantization techniques that allow shaping of the musical continuum in novel ways. Since the instrument is in itself crossadaptive between its constituent parts (and simply also because we think it sounds great), we wanted to experiment with it within the crossadaptive project too.
Kim Henry’s instrument
The session was done as an interplay between Kim Henry on his instrument and Øyvind Brandtsegg on vocals and crossadaptive processing. It was conducted as an initial exploration of just “seeing what would happen” and trying out various ways of making crossadaptive connections here and there. The two audio examples here are excerpts of longer takes.
February 12th, we did a session with Michael Duch on double bass, exploring auto-adaptive use of our techniques. We were interested in seeing how the crossadaptive techniques could be used for personal timbral expansion for a single player. This is a step back in complexity from the crossadaptive interplay, but is interesting for two reasons. One is to check how useful our techniques of modulation are in a setting with more traditional performer control: where there is only one performer modulating himself, there is a closer relationship between performer intention and timbral result. Two, the reason to do this specifically with Michael is that we know from his work with Lemur and other settings that he intently and intimately relates to the performance environment, the resonances of the room and the general ambience. Due to this focus, we also wanted to use live convolution techniques, where he first records an impulse response and then plays through the same filter himself. This exposed one feature needed in the live convolver: one might want to delay the activation of the new impulse response until its recording is complete (otherwise we will almost certainly get extreme resonances while recording, since the filter and the excitation are very similar). That technical issue aside, it was musically very interesting to hear how he explored resonances in his own instrument, and also used small amounts of detuning to approach beating effects in the relation between filter and excitation signal. The self-convolution also exposes parts of the instrument spectrum that usually are not so noticeable, like bassy components of high notes, or prominent harmonics that would otherwise be perceptually masked by their merging into the full tone of the instrument.
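To illustrate the deferred IR activation described above, here is a minimal Python sketch. This is only an illustration of the idea, not the project's actual convolver implementation; the class, buffer handling and names are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

class DeferredLiveConvolver:
    """Records a new IR from the live input, but keeps filtering with the
    previous IR until the recording is complete."""

    def __init__(self, ir_len):
        self.active_ir = np.zeros(ir_len)   # IR currently used for filtering
        self.pending_ir = np.zeros(ir_len)  # IR being recorded
        self.write_pos = 0
        self.recording = False

    def start_ir_recording(self):
        self.pending_ir[:] = 0.0
        self.write_pos = 0
        self.recording = True

    def process(self, block):
        if self.recording:
            # Fill the pending IR from the live input...
            n = min(len(block), len(self.pending_ir) - self.write_pos)
            self.pending_ir[self.write_pos:self.write_pos + n] = block[:n]
            self.write_pos += n
            if self.write_pos >= len(self.pending_ir):
                # ...and swap it in only now, so we never convolve the
                # excitation with a filter recorded from the same material
                # while the recording is still in progress.
                self.active_ir, self.pending_ir = self.pending_ir, self.active_ir
                self.recording = False
        # Naive block convolution, truncated to block length; a real
        # implementation would use partitioned convolution with proper
        # tail handling to keep latency low.
        return fftconvolve(block, self.active_ir)[:len(block)]
```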
Take 1, autoadaptive exploration
Take 2, autoadaptive exploration
Self convolution
Self-convolution take 1
Self-convolution take 2
Self-convolution take 3
Self-convolution take 4
Thursday February 1st, we had an enjoyable session at the Universität der Künste in Berlin. This was at the Grunewaldstraße campus and generously hosted by professor Alberto De Campo. It was a nice opportunity to follow up on earlier collaboration with David Moss, as we have learned so much about performance, improvisation and music in general from him on earlier occasions.
Earlier the same week I had presented the crossadaptive project to prof. De Campo's students of computational art and performance with complex systems. This environment of arts and media studies at UdK was particularly conducive to our research, and we had some very interesting discussions.
David Moss – vocals
Øyvind Brandtsegg – crossadaptive processing, vocals
Alberto De Campo – observer
Marija Mickevica – observer, and vocals on one take
More details on these tracks will follow; for now I am uploading them here so that the involved parties can get access.
Initial exploration, the (becoming) classic reverb+delay crossadaptive situation
Test session, exploring one effect only
Test session, exploring one effect only (2)
First take
Second take
Third take
Then we did some explorations of David telling stories, live convolving with Øyvind’s impulse responses.
Story 1
Story 2
And we were lucky that student Marija Mickevica wanted to try recording live impulse responses while David was telling stories. Here’s an example:
Story with Marija’s impulse responses
And a final take with David and Øyvind, where all previously tested effects and crossadaptive mappings were enabled. Selective mix of effects and modulations was controlled manually by Øyvind during the take.
Participants:
Sissel Vera Pettersen, vocals
Ingrid Lode, vocals
Heidi Skjerve, vocals
Tone Åse, vocals
Øyvind Brandtsegg, processing
Andreas Bergsland, observer and video documentation
Thomas Henriksen, sound engineer
Rune Hoemsnes, sound engineer
We also had the NTNU documentation team (Martin Kristoffersen and Ola Røed) making a separate video recording of the session.
Session objective and focus:
We wanted to try out crossadaptive processing with similar instruments. Until this session, we had usually used it on a combination of two different instruments, leading to very different analysis conditions. The analysis methods respond a bit differently to each instrument type, and each instrument also "triggers" the processing in its own particular manner. It was thought interesting to try some experiments under more "even" conditions. Using four singers and combining them in different duo configurations, we also saw the potential for gleaning personal expressive differences and approaches to the crossadaptive performance situation. This also allowed them to switch roles, i.e. performing under the processing condition where they previously had the modulating role. No attempt was made to exhaustively try every possible combination of roles and effects; we just wanted to try a variety of scenarios possible with the current resources. The situation proved interesting in so many ways, and further exploration would be necessary to probe the research potential herein.
In addition to the analyzer-modulator variant of crossadaptive processing, we also did several takes of live convolution and streaming convolution. This session was the very first performative exploration of streaming convolution.
We used a reverb (Valhalla) on one of the signals, and a custom granular reverb (partikkelverb) on the other. The crossadaptive mappings were first designed so that each of the signals could have a "prolongation" effect (larger size for the reverb, more time smearing for the granular effect). However, after the first take, it seemed that the time smearing of the granular effect was not so clearly perceived as a musical gesture. We then replaced the time smearing parameter with a "graininess" parameter (controlling grain duration). This setting was used for the remaining takes. We used transient density combined with amplitude to control the reverb size, where louder and faster singing would make the reverb shorter (smaller). We used dynamic range to control the time smearing parameter of the granular effect, and transient density to control the grain size (faster singing makes the grains shorter).
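As a minimal sketch of this kind of mapping, assuming analysis features already normalized to the range 0 to 1 (the scaling ranges below are illustrative, not the values used in the session):

```python
def map_features(amplitude, transient_density, dynamic_range):
    """Map vocal analysis features to effect parameters."""
    activity = 0.5 * (amplitude + transient_density)     # combine the two features
    reverb_size = 1.0 - 0.9 * activity                   # louder/faster -> smaller reverb
    time_smear = dynamic_range                           # wider dynamics -> more smearing (take 1)
    grain_dur = 0.2 - 0.18 * transient_density           # faster -> shorter, grainier (later takes)
    return reverb_size, time_smear, grain_dur
```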
Video digest of the session
Crossadaptive analyzer-modulator takes
Crossadaptive take 1: Heidi/Ingrid
Heidi has a reverb controlled by Ingrid's amplitude and transient density
– louder and faster singing makes the reverb shorter
Ingrid has a time smearing effect.
– time is more slowed down when Heidi uses a larger dynamic range
Crossadaptive take 2: Heidi/Sissel
Heidi has a reverb controlled by Sissel's amplitude and transient density
– louder and faster singing makes the reverb shorter
Sissel has a granular effect.
– the effect is more grainy (shorter grain duration) when Heidi plays with a higher transient density (faster)
Crossadaptive take 3: Sissel/Tone
Sissel has a reverb controlled by Tone's amplitude and transient density
– louder and faster singing makes the reverb shorter
Tone has a granular effect.
– the effect is more grainy (shorter grain duration) when Sissel plays with a higher transient density (faster)
Crossadaptive take 4: Tone/Ingrid
Ingrid has a reverb controlled by Tone's amplitude and transient density
– louder and faster singing makes the reverb shorter
Tone has a granular effect.
– the effect is more grainy (shorter grain duration) when Ingrid plays with a higher transient density (faster)
Crossadaptive take 5: Tone/Ingrid
Same settings as for take 4
Convolution
Doing live convolution with two singers was thought interesting for the same reasons as listed in the introduction, creating a controlled scenario with two similarly-featured signals. As the voice is in itself one of the richest instruments in terms of signal variation, it was also interesting to explore convolution with these instruments. We used the now familiar live convolution techniques, where one of the performers records an impulse response and the other plays through it. In addition, we explored streaming convolution, developed by Victor Lazzarini as part of this project. In streaming convolution, the two signals are treated even more equally than is the case in live convolution. Streaming convolution simply convolves two circular buffers of a predetermined length, allowing both signals the exact same role in relation to the other. It also has a "freeze mode", where updating of the buffer is suspended, allowing one or the other (or both) of the signals to be kept stationary as a filter for the other. This freezing was controlled by a physical pedal, in the same manner as we use a pedal to control IR sampling with live convolution. In some of the videos one can see the singers raising a hand, as a signal to the other that they are now freezing their filter. When the signal is not frozen (i.e. streaming), there is a practically indeterminate latency in the process as seen from the performer's perspective. This stems from the fact that the input stream is segmented with respect to the filter length. Any feature recorded into the filter will have a position in the filter dependent on when it was recorded, and the perceived latency between an input impulse and the convolver output relies on where in the "impulse response" the most significant energy or transient can be found. The technical latency of the filter is still very low, but the perceived latency depends on the material.
Liveconvolver take 1: Tone/Sissel
Tone records the IR
Liveconvolver take 2: Tone/Sissel
Sissel records the IR
Liveconvolver take 3: Heidi/Sissel
Sissel records the IR
Liveconvolver take 4: Heidi/Sissel
Heidi records the IR
Liveconvolver take 5: Heidi/Ingrid
Heidi records the IR
Streaming Convolution
These are the very first performative explorations of the streaming convolution technique.
This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all these performers in different constellations, some new combinations were tested this day. The approach was to explore fairly complex feature-modulator mappings. The focus was not on intellectualizing the details of these mappings, but rather on experiencing them as a whole, "as instrument". I had found that simple mappings, although easy to decode and understand for both performer and listener, quickly would "wear out" and become flat, boring or plainly limiting for musical development during the piece. I attempted to create some "rich" mappings, combining different levels of subtlety: some clearly audible effects and some subtle timbral ones. The mappings were designed with some specific musical gestures and interactions in mind, and these are listed together with the mapping details for each constellation later in this post.
During this session, we also explored the live convolver in terms of how the audio content in the IR affects the resulting creative options and performative environment for the musician playing through the effect. The liveconvolver takes are presented interspersed with the crossadaptive "feature-modulator" (one could say "proper crossadaptive") takes. Recording of the impulse response for the convolution was triggered via an external pedal controller during performance, and we let each musician in turn have the role of IR recorder.
Participants:
Jordan Morton: double bass and voice
Miller Puckette: guitar
Steven Leffue: sax
Kyle Motl: double bass
Oeyvind Brandtsegg: crossadaptive mapping design, processing
Andrew Munsie: recording engineer
The music played was mostly free improvisations, but two of the takes with Jordan Morton were performances of her compositions. These were composed in dialogue with the system, during and in between earlier sessions. She both plays the bass and sings, and wanted to explore how phrasing and shaping of precomposed material could be used to expressively control the timbral modulations of the effects processing.
Jordan Morton: bass and voice.
These pieces were composed by Jordan with the intention that they be performed freely, shaped according to the situation at performance time, allowing the crossadaptive modulations ample room for influence on the sound.
“I confess” (Jordan Morton). Bass and voice.
“Backbeat thing” (Jordan Morton). Bass and voice.
The effects used:
Effects on vocals: Delay, Resonant distorted lowpass
Effects on bass: Reverb, Granular tremolo
The features and the modulator mappings:
(also stating an intended purpose for each mapping; a code sketch of this kind of feature-to-parameter routing follows the list)
Bass spectral flatness, and
Bass spectral flux: both features giving less reverb time on bass
Purpose: When the bass becomes more noisy, it will get less reverb
Vocal envelope dynamics (dynamic range), and
Vocal transient density: both features giving lower lowpass filter cutoff frequency on reverb on bass
Purpose: When the vocal becomes more active, the bass reverb will be less pronounced
Bass transient density: higher cutoff frequency (resonant distorted lowpass filter) on vocal
Purpose: to animate a distorted lo-fi effect on the vocals, according to the activity level on bass
Vocal mfcc-diff (formant strength, “pressed-ness”): Send level for granular tremolo on bass
Purpose: add animation and drama to the bass when the vocal becomes more energetic
Bass transient density: lower lowpass filter frequency for the delay on vocal
Purpose: clean up vocal delays when bass becomes more active
Vocal transient density: shorter delay time for the delay on vocal
Bass spectral flux: longer delay time for the delay on vocal
Purpose: just for animation/variation
Vocal dynamic range, and
Vocal transient density: both features giving less feedback for the delay on vocal
Purpose: clean up vocal delay for better articulation on text
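As referenced above, here is a sketch of how a routing list like this could be represented in code. Feature names, parameter names and all ranges are illustrative assumptions; the project's actual modulation matrix is more elaborate.

```python
from collections import defaultdict

ROUTES = [
    # (source feature,          target parameter,        out_min, out_max)
    ("bass.spectral_flatness",  "bass.reverb_time",      3.0,     0.5),
    ("bass.spectral_flux",      "bass.reverb_time",      3.0,     0.5),
    ("voice.transient_density", "bass.reverb_lp_cutoff", 8000.0,  800.0),
    ("bass.transient_density",  "voice.lpf_cutoff",      200.0,   4000.0),
    ("voice.transient_density", "voice.delay_time",      0.8,     0.1),
]

def apply_routes(features, params):
    """features: analysis values normalized to [0, 1]. An inverted range
    (out_min > out_max) means the feature reduces the parameter."""
    pending = defaultdict(list)
    for src, dst, lo, hi in ROUTES:
        pending[dst].append(lo + (hi - lo) * features[src])
    for dst, values in pending.items():
        # average when several features drive one target parameter
        params[dst] = sum(values) / len(values)
    return params
```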
Liveconvolver tracks Jordan/Jordan:
The tracks are improvisations. Here, Jordan’s voice was recorded as the impulse response and she played bass through the voice IR. Since she plays both instruments, this provides a unique approach to the live convolution performance situation.
Liveconvolver take 1: Jordan Morton bass and voice
Liveconvolver take 2: Jordan Morton bass and voice
Jordan Morton and Miller Puckette
Liveconvolver tracks Jordan/Miller:
These tracks were improvised by Jordan Morton (bass) and Miller Puckette (guitar). Each of the musicians was given the role of "impulse response recorder" in turn, while the other played through the convolver effect.
Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Miller records the IR.
Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Jordan records the IR.
Discussion on the performance with live convolution, with Jordan Morton and Miller Puckette.
Miller Puckette and Steven Leffue
These tracks were improvised by Miller Puckette (guitar) and Steven Leffue (sax). The feature-modulator mapping was designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping before the first take. The intention of this strategy was to create a naturally flowing environment of exploration, with not-too-obvious relationships between instrumental gestures and resulting modulations. After the first take, selected elements of the mapping (one for each musician) were explained in more detail to the performers, with the anticipation that these features might then be explored more consciously.
Take 1:
Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 1. Details of the feature-modulator mapping are given below.
Discussion 1 on the crossadaptive performance, with Miller Puckette and Steven Leffue. On the relationship between what you play and how that modulates the effects, on balance of monitoring, and other issues.
The effects used:
Effects on guitar: Spectral delay
Effects on sax: Resonant distorted lowpass, Spectral shift, Reverb
The features and the modulator mappings:
(also stating an intended purpose for each mapping)
Guitar envelope crest: longer reverb time on sax
Purpose: dynamic guitar playing will make a big room for the sax
Guitar transient density: higher cutoff frequency for the reverb highpass filter, and lower cutoff frequency for the reverb lowpass filter
Purpose: when guitar is more active, the reverb on sax will be less full (less highs and less lows)
Guitar transient density (again): downward spectral shift on sax
Purpose: animation and variation
Guitar spectral flux: higher cutoff frequency (resonant distorted lowpass filter) on sax
Purpose: just for animation and variation. Note that spectral flux (especially on the guitar) will also give high values on single notes in the low register (the lowest octave), in addition to the expected behaviour of giving higher values on more noisy sounds.
Sax envelope crest: less delay send on guitar
Purpose: more dynamic sax playing will "dry up" the guitar delays; one must play long notes to open the guitar's send to the delay
Sax transient density: longer delay time on guitar. This modulation mapping was also gated by the rms amplitude of the sax (so that it is only active when the sax gets loud)
Purpose: loud and fast sax will give more distinct repetitions (further apart) on the guitar delay
Sax pitch: increase spectral delay shaping of the guitar (spectral delay with different delay times for each spectral band)
Purpose: more unnatural (crazier) effect on guitar when sax goes high
Sax spectral flux: more feedback on guitar delay
Purpose: noisy sax playing will give more distinct repetitions (more repetitions) on the guitar delay
Take 2:
Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 2. The feature-modulator mapping was the same as for take 1.
Discussion 2 on the crossadaptive performance, with Miller Puckette and Steven Leffue. Instructions and intellectualizing the mapping made it harder
Liveconvolver tracks:
Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.
Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Miller records the IR.
Discussion 1 on playing with the live convolver, with Miller Puckette and Steven Leffue.
Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Steven records the IR.
Discussion 2 on playing with the live convolver, with Miller Puckette and Steven Leffue.
Steven Leffue and Kyle Motl
Two different feature-modulator mappings were used, and we present one take of each. Like the mappings used for Miller/Steven, these were designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping. The mapping used for the first take closely resembles the one for Steven/Miller, with just a few changes to accommodate the different musical context and how the analysis methods respond to the instruments:
Bass transient density: shorter reverb time on sax
The reverb equalization (highpass and lowpass) was skipped
Bass envelope crest: increase send level for granular processing on sax
Bass rms amplitude: Parametric morph between granular tremolo and granular time stretch on sax
In the first crossadaptive take in this duo, Kyle commented that the amount of delay made it hard to play, and that any fast phrases would just turn into mush. It seemed the choice of effects and modulations was not optimal, so we tried another configuration of effects (and thus another mapping of features to modulators).
This mapping had earlier been used for duo playing between Kyle (bass) and Øyvind (vocal) on several occasions, and it was merely adjusted to accommodate the different timbral dynamics of the saxophone. In this way, Kyle was familiar with the possibilities of the mapping, but not with the context in which it would be used.
The granular processing on both instruments was done with the Hadron Particle Synthesizer, which allows multidimensional parameter navigation through a relatively simple modulation interface (X, Y and 4 expression controllers). The specifics of the actual modulation routing and mapping within Hadron could be described, but in the context of the current report, further technical detail would only take away from the clarity of the presentation. Even though the details of the parameter mapping were designed deliberately, at this point in the performative approach to playing with it, we no longer paid attention to technical specifics. Rather, the focus was on letting go and trying to experience the timbral changes rather than intellectualizing them.
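For readers curious about the general principle, here is a generic sketch of how an X/Y position can steer many parameters at once by interpolating between stored states. This is a plain bilinear morph assumed for illustration, not Hadron's actual interpolation scheme.

```python
import numpy as np

# Illustrative parameter vectors for four corner states
corners = {
    (0, 0): np.array([0.1, 0.5, 0.0]),
    (1, 0): np.array([0.9, 0.2, 0.3]),
    (0, 1): np.array([0.4, 0.8, 0.7]),
    (1, 1): np.array([0.6, 0.1, 1.0]),
}

def morph(x, y):
    """Bilinearly interpolate every parameter from the X/Y position (0..1)."""
    return ((1 - x) * (1 - y) * corners[(0, 0)]
            + x * (1 - y) * corners[(1, 0)]
            + (1 - x) * y * corners[(0, 1)]
            + x * y * corners[(1, 1)])
```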
The effects used:
Effects on sax: Delay, granular processing
Effects on bass: Reverb, granular processing
The features and the modulator mappings:
(also stating an intended purpose for each mapping)
Sax envelope crest: shorter reverb time on bass
Sax rms amp: higher cutoff frequency for reverb highpass filter
Purpose: louder sax will make the bass reverb thinner
Sax transient density: lower cutoff frequency for reverb lowpass filter
Sax envelope dynamics (dynamic range): higher cutoff frequency for reverb lowpass filter
Purpose: faster sax playing will make the reverb less prominent, but more dynamic playing will enhance it
Sax spectral flux: Granular processing state morph (Hadron X-axis) on bass
Sax envelope dynamics: Granular processing state morph (Hadron Y-axis) on bass
Sax rms amplitude: Granular processing state morph (Hadron Y-axis) on bass
Purpose: animation and variation
Bass spectral flatness: higher cutoff frequency of the delay feedback path on sax
Purpose: more noisy bass playing will enhance delayed repetitions
Bass envelope dynamics: less delay feedback on sax
Purpose: more dynamic playing will give less repetitions in delay on sax
Bass pitch: upward spectral shift on sax
Purpose: animation and variation, pulling in same direction (up pitch equals shift up)
Bass transient density: Granular process expression 1 (Hadron) on sax
Bass rms amplitude: Granular process expression 2 & 3 (Hadron) on sax
Bass rhythmic irregularity: Granular process expression 4 (Hadron) on sax
Bass MFCC diff: Granular processing state morph (Hadron X-axis) on sax
Bass envelope crest: Granular processing state morph (Hadron Y-axis) on sax
Purpose: multidimensional and rich animation and variation
On the second crossadaptive take between Steven and Kyle, I asked: "Does this hinder interaction, or does it make something interesting happen?"
Kyle says it hinders the way they would normally play together. “We can’t go to our normal thing because there’s a third party, the mediation in between us. It is another thing to consider.” Also, the balance between the acoustic sound and the processing is difficult. This is even more difficult when playing with headphones, as the dynamic range and response is different. Sometimes the processing will seem very quiet in relation to the acoustic sound of the instruments, and at other times it will be too loud.
Steven says at one point he started not paying attention to the processing and focused mostly on what Kyle was doing. "Just letting the processing be the reaction to that, not treating it as an equal third party. … Totally paying attention to what the other musician is doing and just keeping up with him, not listening to myself." This also mirrors the usual options of improvisational listening strategy and focus: listening to the whole, or focusing on specific elements in the resulting sound image.
Longer reflective conversation between Steven Leffue, Kyle Motl and Øyvind Brandtsegg, done after the crossadaptive feature-modulator takes, touching on some of the problems encountered, but also reflecting on the wider context of different kinds of music accompaniment systems.
Liveconvolver tracks:
Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.
Discussion 1 on playing with the live convolver, with Steven Leffue and Kyle Motl.
Discussion 2 on playing with the live convolver, with Steven Leffue and Kyle Motl.