Exploring radically new modes of musical interaction in live performance
Category: Examples
Session with Kim Henry Ortveit
February 22, 2018
-
Kim Henry is currently a master's student in music technology at NTNU, and as part of his master's project he has designed a new hybrid instrument. The instrument allows close interactions between what is played on a keyboard (or rather a ...
Session with Michael Duch
February 22, 2018
-
On February 12th, we did a session with Michael Duch on double bass, exploring auto-adaptive use of our techniques. We were interested in seeing how the crossadaptive techniques could be used for personal timbral expansion for a single player. This is ...
Session with David Moss in Berlin
February 2, 2018
-
On Thursday February 1st, we had an enjoyable session at the Universität der Künste in Berlin. This was at the Grunewaldstraße campus, generously hosted by professor Alberto De Campo. This was a nice opportunity to follow up on earlier collaboration ...
Session in UCSD Studio A
September 8, 2017
-
This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all ...
Cross adaptive session with 1st year jazz students, NTNU, March 7-8
April 6, 2017
-
This is a description of a session with first year jazz students at NTNU recorded March 7 and 8. The session was organized as part of the ensemble teaching that is given to jazz students at NTNU, and was meant to take care ...
Convolution experiments with Jordan Morton
March 1, 2017
-
Jordan Morton is a bassist and singer, she regularly performs using both instruments combined. This provides an opportunity to explore how the liveconvolver can work when both the IR and the live input are generated by the same musician. We did a ...
Crossadaptive session NTNU 12. December 2016
December 16, 2016
-
Participants: Trond Engum (processing musician) Tone Åse (vocals) Carl Haakon Waadeland (drums and percussion) Andreas Bergsland (video) Thomas Henriksen (sound technician) Video digest from session: https://www.youtube.com/watch?v=ktprXKVdqF4&feature=youtu.be Session objective and focus: The main focus in this session was to explore other ...
Adding new array operations to Csound II: the Mel-frequency filterbank
July 9, 2016
-
As I have discussed in my previous post, as part of this project we have been selecting a number of useful operations to implement in Csound as part of its array opcode collection. We have looked at the components ...
Evolving Neural Networks for Cross-adaptive Audio Effects
June 27, 2016
-
I'm Iver Jordal and this is my first blog post here. I have studied music technology for approximately two years and computer science for almost five years. During the last 6 months I've been working on a specialization project which ...
The Analyzer and MIDIator plugins
June 2, 2016
-
... so with all these DAW examples "where are the plugins" you might ask. Well, the most up-to-date versions will always be available in the code repo at github. BUT, I've also uploaded precompiled versions of the plugins for Windows ...
Simple analyzer-modulator setup for Reaper
June 2, 2016
-
Following up on the recent Ableton Live set, here's a simple analyzer-modulator project for Reaper. The routing of signals is simpler and more flexible in Reaper, so we do not have the clutter of an extra channel to enable MIDI out, rather we ...
Simple analyzer-modulator setup for Ableton Live
June 2, 2016
-
I've created a simple Live set to show how to configure the analyzer and MIDIator in Ableton Live. There are some small snags and peculiarities (read on), but basically it runs ok. The analyzer will not let audio through, so we ...
Simple crossadaptive mixing template for Logic
June 1, 2016
-
Next week we'll go to London for seminars at Queen Mary and De Montfort. We'll also do a mixing session with Gary Bromham, to experiment with the crossadaptive modulation techniques in a postproduction setting. For this purpose I've done a simple session template in ...
Mixing example, simplified interaction demo
May 24, 2016
-
When working further with some of the examples produced in an earlier session, I wanted to see if I could demonstrate more clearly the influence of one instrument on the other instrument's sound. Here I've made an example where the ...
Cross adaptive mixing in a standard DAW
May 15, 2016
-
To enable the use of these techniques in a common mixing situation, we’ve made some example configurations in Reaper. The idea is to extract some feature of the modulator signal (using common tools like EQs and Compressors rather than more ...
Kim Henry is currently a master's student in music technology at NTNU, and as part of his master's project he has designed a new hybrid instrument. The instrument allows close interactions between what is played on a keyboard (or rather a Seaboard) and some drum pads. Modulations and patterns played on one part of the instrument will determine how other components of the instrument actually sound. This is combined with some intricate layering, looping, and quantization techniques that allow shaping of the musical continuum in novel ways. Since the instrument is in itself crossadaptive between its constituent parts (and simply also because we think it sounds great), we wanted to experiment with it within the crossadaptive project too.
Kim Henry’s instrument
The session was done as an interplay between Kim Henry on his instrument and Øyvind Brandtsegg on vocals and crossadaptive processing. It was conducted as an initial exploration of just “seeing what would happen” and trying out various ways of making crossadaptive connections here and there. The two audio examples here are excerpts of longer takes.
On February 12th, we did a session with Michael Duch on double bass, exploring auto-adaptive use of our techniques. We were interested in seeing how the crossadaptive techniques could be used for personal timbral expansion for a single player. This is a step back in complexity from the crossadaptive interplay, but is interesting for two reasons. First, to check how useful our modulation techniques are in a setting with more traditional performer control: when there is only one performer modulating himself, there is a closer relationship between performer intention and timbral result. Second, the reason to do this specifically with Michael is that we know from his work with Lemur and other settings that he intently and intimately relates to the performance environment, the resonances of the room and the general ambience. Due to this focus, we also wanted to use live convolution techniques, where he first records an impulse response and then plays through that same filter himself. This exposed one feature needed in the live convolver: one might want to delay the activation of the new impulse response until its recording is complete (otherwise we will almost certainly get extreme resonances while recording, since the filter and the excitation are very similar). That technical issue aside, it was musically very interesting to hear how he explored resonances in his own instrument, and also used small amounts of detuning to approach beating effects in the relation between filter and excitation signal. The self-convolution also exposes parts of the instrument spectrum that usually are not so noticeable, like bassy components of high notes, or prominent harmonics that otherwise would be perceptually masked by their merging into the full tone of the instrument.
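To make the deferred IR activation concrete, here is a minimal block-based sketch in Python. It is an illustrative assumption, not the project's actual liveconvolver implementation (which runs as a Csound-based plugin): while a new IR is being recorded, the convolver keeps using the previous IR, and the new one is swapped in only once its buffer is full.

```python
import numpy as np
from scipy.signal import fftconvolve

class LiveConvolver:
    def __init__(self, ir_len):
        self.ir = np.zeros(ir_len)
        self.ir[0] = 1.0                       # start as a pass-through filter
        self.pending = np.zeros(ir_len)        # IR currently being recorded
        self.rec_pos = None                    # None = not recording
        self.tail = np.zeros(ir_len - 1)       # overlap-add memory

    def start_ir_recording(self):
        """Begin capturing a new IR, e.g. on a pedal press."""
        self.pending[:] = 0.0
        self.rec_pos = 0

    def process(self, block):
        # While recording, fill the pending IR from the input, but keep
        # convolving with the old IR to avoid the extreme resonances of
        # filtering a signal through (nearly) itself.
        if self.rec_pos is not None:
            n = min(len(block), len(self.pending) - self.rec_pos)
            self.pending[self.rec_pos:self.rec_pos + n] = block[:n]
            self.rec_pos += n
            if self.rec_pos >= len(self.pending):
                self.ir = self.pending.copy()  # recording complete: swap now
                self.rec_pos = None
        y = fftconvolve(block, self.ir)        # convolve this block
        y[:len(self.tail)] += self.tail        # add tail from earlier blocks
        self.tail = y[len(block):].copy()      # save new tail
        return y[:len(block)]
```

In performance, start_ir_recording would be tied to a trigger such as the external pedal controller mentioned in the UCSD session below.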
Take 1, autoadaptive exploration
Take 2, autoadaptive exploration
Self convolution
Self-convolution take 1
Self-convolution take 2
Self-convolution take 3
Self-convolution take 4
On Thursday February 1st, we had an enjoyable session at the Universität der Künste in Berlin. This was at the Grunewaldstraße campus, generously hosted by professor Alberto De Campo. It was a nice opportunity to follow up on earlier collaboration with David Moss, as we have learned so much about performance, improvisation and music in general from him on earlier occasions.
Earlier the same week I had presented the crossadaptive project to prof. De Campo's students of computational art and performance with complex systems. This environment of arts and media studies at UdK was particularly conducive to our research, and we had some very interesting discussions.
David Moss – vocals
Øyvind Brandtsegg – crossadaptive processing, vocals
Alberto De Campo – observer
Marija Mickevica – observer, and vocals on one take
More details on these tracks will follow, currently I just upload them here so that the involved parties might get access.
Initial exploration, the (becoming) classic reverb+delay crossadaptive situation
Test session, exploring one effect only
Test session, exploring one effect only (2)
First take
Second take
Third take
Then we did some explorations of David telling stories, live convolving with Øyvind’s impulse responses.
Story 1
Story 2
And we were lucky that student Marija Mickevica wanted to try recording live impulse responses while David was telling stories. Here’s an example:
Story with Marija’s impulse responses
And a final take with David and Øyvind, where all previously tested effects and crossadaptive mappings were enabled. Selective mix of effects and modulations was controlled manually by Øyvind during the take.
This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all these performers in different constellations, some new combinations were tested this day. The approach was to explore fairly complex feature-modulator mappings. We made no particular effort to intellectualize the details of these mappings, but rather experienced them as a whole, "as instrument". I had found that simple mappings, although easy to decode and understand for both performer and listener, quickly would "wear out" and become flat, boring or plainly limiting for musical development during the piece. I attempted to create some "rich" mappings, with combinations of different levels of subtlety, some clearly audible and some subtle timbral effects. The mappings were designed with some specific musical gestures and interactions in mind, and these are listed together with the mapping details for each constellation later in this post.
During this session, we also explored the live convolver in terms of how the audio content in the IR affects the resulting creative options and performative environment for the musician playing through the effect. The liveconvolver takes are presented interspersed with the crossadaptive "feature-modulator" (one could say "proper crossadaptive") takes. Recording of the impulse response for the convolution was triggered via an external pedal controller during performance, and we let each musician in turn have the role of IR recorder.
Participants:
Jordan Morton: double bass and voice
Miller Puckette: guitar
Steven Leffue: sax
Kyle Motl: double bass
Oeyvind Brandtsegg: crossadaptive mapping design, processing
Andrew Munsie: recording engineer
The music played was mostly free improvisations, but two of the takes with Jordan Morton were performances of her compositions. These were composed in dialogue with the system, during and in between earlier sessions. She both plays the bass and sings, and wanted to explore how phrasing and shaping of precomposed material could be used to expressively control the timbral modulations of the effects processing.
Jordan Morton: bass and voice.
These pieces were composed by Jordan with the intention that they be performed freely and shaped according to the situation at performance time, allowing the crossadaptive modulations ample room for influence on the sound.
“I confess” (Jordan Morton). Bass and voice.
“Backbeat thing” (Jordan Morton). Bass and voice.
The effects used:
Effects on vocals: Delay, Resonant distorted lowpass
Effects on bass: Reverb, Granular tremolo
The features and the modulator mappings:
(also stating an intended purpose for each mapping)
Bass spectral flatness, and
Bass spectral flux: both features giving lesser reverb time on bass
Purpose: When the bass becomes more noisy, it will get less reverb
Vocal envelope dynamics (dynamic range), and
Vocal transient density: both features giving lower lowpass filter cutoff frequency on reverb on bass
Purpose: When the vocal becomes more active, the bass reverb will be less pronounced
Bass transient density: higher cutoff frequency (resonant distorted lowpass filter) on vocal
Purpose: to animate a distorted lo-fi effect on the vocals, according to the activity level on bass
Vocal mfcc-diff (formant strength, “pressed-ness”): Send level for granular tremolo on bass
Purpose: add animation and drama to the bass when the vocal becomes more energetic
Bass transient density: lower lowpass filter frequency for the delay on vocal
Purpose: clean up vocal delays when the bass becomes more active
Vocal transient density: shorter delay time for the delay on vocal
Bass spectral flux: longer delay time for the delay on vocal
Purpose: just for animation/variation
Vocal dynamic range, and
Vocal transient density: both features giving less feedback for the delay on vocal
Purpose: clean up vocal delay for better articulation of the text (this mapping is sketched in code after the list)
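As a concrete illustration of that last mapping, here is a hypothetical Python sketch of how two analysis features can be normalized, combined and inverted to drive one effect parameter. All ranges and constants are illustrative assumptions, not the actual Analyzer/MIDIator settings used in the session.

```python
import numpy as np

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Map value from [in_lo, in_hi] to [out_lo, out_hi], with clipping."""
    t = np.clip((value - in_lo) / (in_hi - in_lo), 0.0, 1.0)
    return out_lo + t * (out_hi - out_lo)

def vocal_delay_feedback(dyn_range_db, transient_density):
    d = scale(dyn_range_db, 0.0, 24.0, 0.0, 1.0)      # dB range, assumed
    t = scale(transient_density, 0.0, 8.0, 0.0, 1.0)  # onsets/sec, assumed
    activity = 0.5 * (d + t)                          # combine the features
    # Inverted mapping: more vocal activity -> less delay feedback,
    # cleaning up the delays for better articulation of the text.
    return scale(1.0 - activity, 0.0, 1.0, 0.1, 0.9)
```

Most of the mappings listed above follow this same scale-combine-invert pattern, only with different features and target parameters.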
Liveconvolver tracks Jordan/Jordan:
The tracks are improvisations. Here, Jordan’s voice was recorded as the impulse response and she played bass through the voice IR. Since she plays both instruments, this provides a unique approach to the live convolution performance situation.
Liveconvolver take 1: Jordan Morton bass and voice
Liveconvolver take 2: Jordan Morton bass and voice
Jordan Morton and Miller Puckette
Liveconvolver tracks Jordan/Miller:
These tracks were improvised by Jordan Morton (bass) and Miller Puckette (guitar). Each of the musicians was given the role of "impulse response recorder" in turn, while the other played through the convolver effect.
Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Miller records the IR.
Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Jordan records the IR.
Discussion on the performance with live convolution, with Jordan Morton and Miller Puckette.
Miller Puckette and Steven Leffue
These tracks were improvised by Miller Puckette (guitar) and Steven Leffue (sax). The feature-modulator mapping was designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping before the first take. The intention of this strategy was to create a naturally flowing environment of exploration, with not-too-obvious relationships between instrumental gestures and resulting modulations. After the first take, some selected elements of the mapping (one for each musician) were explained in more detail, with the anticipation that these features might then be explored more consciously.
Take 1:
Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 1. Details of the feature-modulator mapping are given below.
Discussion 1 on the crossadaptive performance, with Miller Puckette and Steven Leffue. On the relationship between what you play and how that modulates the effects, on balance of monitoring, and other issues.
The effects used:
Effects on guitar: Spectral delay
Effects on sax: Resonant distorted lowpass, Spectral shift, Reverb
The features and the modulator mappings:
(also stating an intended purpose for each mapping)
Guitar envelope crest: longer reverb time on sax
Purpose: dynamic guitar playing will make a big room for the sax
Guitar transient density: higher cutoff frequency for the reverb highpass filter, and lower cutoff frequency for the reverb lowpass filter
Purpose: when guitar is more active, the reverb on sax will be less full (less highs and less lows)
Guitar transient density (again): downward spectral shift on sax
Purpose: animation and variation
Guitar spectral flux: higher cutoff frequency (resonant distorted lowpass filter) on sax
Purpose: just for animation and variation. Note that spectral flux (especially on the guitar) will also give high values on single notes in the low register (the lowest octave), in addition to the expected behaviour of giving higher values on more noisy sounds.
Sax envelope crest: less delay send on guitar
Purpose: more dynamic sax playing will "dry up" the guitar delays; the sax must play long notes to open the send from guitar to delay
Sax transient density: longer delay time on guitar. This modulation mapping was also gated by the rms amplitude of the sax, so that it is only active when the sax gets loud (a sketch of this gated mapping follows the list)
Purpose: loud and fast sax will give more distinct repetitions (further apart) on the guitar delay
Sax pitch: increase spectral delay shaping of the guitar (spectral delay with different delay times for each spectral band)
Purpose: more unnatural (crazier) effect on guitar when sax goes high
Sax spectral flux: more feedback on guitar delay
Purpose: noisy sax playing will give more distinct repetitions (more repetitions) on the guitar delay
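Here is a hypothetical sketch of the gated mapping mentioned above: sax transient density lengthens the guitar delay time, but the modulation is only active while the sax rms amplitude is above a threshold. The threshold, normalization range and delay times are illustrative assumptions.

```python
import numpy as np

def guitar_delay_time(sax_transient_density, sax_rms_db,
                      gate_db=-30.0, base_s=0.2, max_extra_s=0.6):
    """Return the guitar delay time in seconds."""
    t = np.clip(sax_transient_density / 8.0, 0.0, 1.0)  # normalize, assumed
    gate = 1.0 if sax_rms_db > gate_db else 0.0         # rms gate
    return base_s + gate * t * max_extra_s  # loud, fast sax -> longer delay
```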
Take 2:
Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 2. The feature-modulator mapping was the same as for take 1.
Discussion 2 on the crossadaptive performance, with Miller Puckette and Steven Leffue: instructions and intellectualizing the mapping made it harder.
Liveconvolver tracks:
Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.
Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Miller records the IR.
Discussion 1 on playing with the live convolver, with Miller Puckette and Steven Leffue.
Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Steven records the IR.
Discussion 2 on playing with the live convolver, with Miller Puckette and Steven Leffue.
Steven Leffue and Kyle Motl
Two different feature-modulator mappings were used, and we present one take of each mapping. Like the mappings used for Miller/Steven, these were designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping. The mapping used for the first take closely resembles the mapping for Steven/Miller, with just a few changes to accommodate the different musical context and how the analysis methods respond to the instruments:
Bass transient density: shorter reverb time on sax
The reverb equalization (highpass and lowpass) was skipped
Bass envelope crest: increase send level for granular processing on sax
Bass rms amplitude: Parametric morph between granular tremolo and granular time stretch on sax
In the first crossadaptive take in this duo, Kyle commented that the amount of delay made it hard to play, and that any fast phrases would just turn into a mush. It seemed the choice of effects and modulations was not optimal, so we tried another configuration of effects (and thus another mapping of features to modulators).
This mapping had earlier been used for duo playing between Kyle (bass) and Øyvind (vocal) on several occasions, and it was merely adjusted to accommodate the different timbral dynamics of the saxophone. In this way, Kyle was familiar with the possibilities of the mapping, but not with the context in which it would be used.
The granular processing on both instruments was done with the Hadron Particle Synthesizer, which allows multidimensional parameter navigation through a relatively simple modulation interface (X, Y and 4 expression controllers). The specifics of the actual modulation routing and mapping within Hadron could be described, but in the context of the current report, further technical detail would only take away from the clarity of the presentation. Even though the details of the parameter mapping were designed deliberately, at this point in the performative approach to playing with it, we simply no longer paid attention to the technical specifics. Rather, the focus was on letting go and trying to experience the timbral changes rather than intellectualizing them.
The effects used:
Effects on sax: Delay, granular processing
Effects on bass: Reverb, granular processing
The features and the modulator mappings:
(also stating an intended purpose for each mapping)
Sax envelope crest: shorter reverb time on bass
Sax rms amp: higher cutoff frequency for reverb highpass filter
Purpose: louder sax will make the bass reverb thinner
Sax transient density: lower cutoff frequency for reverb lowpass filter
Sax envelope dynamics (dynamic range): higher cutoff frequency for reverb lowpass filter
Purpose: faster sax playing will make the reverb less prominent, but more dynamic playing will enhance it
Sax spectral flux: Granular processing state morph (Hadron X-axis) on bass
Sax envelope dynamics: Granular processing state morph (Hadron Y-axis) on bass
Sax rms amplitude: Granular processing state morph (Hadron Y-axis) on bass
Purpose: animation and variation
Bass spectral flatness: higher cutoff frequency of the delay feedback path on sax
Purpose: more noisy bass playing will enhance delayed repetitions
Bass envelope dynamics: less delay feedback on sax
Purpose: more dynamic playing will give less repetitions in delay on sax
Bass pitch: upward spectral shift on sax
Purpose: animation and variation, pulling in same direction (up pitch equals shift up)
Bass transient density: Granular process expression 1 (Hadron) on sax
Bass rms amplitude: Granular process expression 2 & 3 (Hadron) on sax
Bass rhythmic irregularity: Granular process expression 4 (Hadron) on sax
Bass MFCC diff: Granular processing state morph (Hadron X-axis) on sax
Bass envelope crest: Granular processing state morph (Hadron Y-axis) on sax
Purpose: multidimensional and rich animation and variation (a generic sketch of this kind of state morphing follows below)
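As a rough illustration of how an X/Y state-morphing interface of this kind can work, here is a generic Python sketch. This is an assumption for explanatory purposes, not documentation of Hadron's actual internals: each corner of the X/Y plane holds a complete parameter state, and the current parameter set is a bilinear interpolation between the corners. The analysis features listed above would then drive x, y and the expression controllers.

```python
import numpy as np

# Four corner states, each a vector of processing parameters (illustrative).
corners = {
    (0, 0): np.array([0.1, 0.5, 0.0]),
    (1, 0): np.array([0.9, 0.2, 0.3]),
    (0, 1): np.array([0.4, 0.8, 0.7]),
    (1, 1): np.array([0.7, 0.1, 1.0]),
}

def morph(x, y):
    """Bilinear interpolation between the four corner states."""
    x, y = np.clip(x, 0.0, 1.0), np.clip(y, 0.0, 1.0)
    return ((1 - x) * (1 - y) * corners[(0, 0)]
            + x * (1 - y) * corners[(1, 0)]
            + (1 - x) * y * corners[(0, 1)]
            + x * y * corners[(1, 1)])

# For example: bass MFCC diff drives x, bass envelope crest drives y.
params = morph(x=0.3, y=0.8)
```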
On the second crossadaptive take between Steven and Kyle, I asked: "Does this hinder interaction, or does it make something interesting happen?"
Kyle says it hinders the way they would normally play together. "We can't go to our normal thing because there's a third party, the mediation in between us. It is another thing to consider." Also, the balance between the acoustic sound and the processing is difficult. This is even more difficult when playing with headphones, as the dynamic range and response are different. Sometimes the processing will seem very quiet in relation to the acoustic sound of the instruments, and at other times it will be too loud.
Steven says at one point he started not paying attention to the processing and focused mostly on what Kyle was doing. "Just letting the processing be the reaction to that, not treating it as an equal third party. … Totally paying attention to what the other musician is doing and just keeping up with him, not listening to myself." This also mirrors the usual options of improvisational listening strategy and focus: listening to the whole, or focusing on specific elements in the resulting sound image.
Longer reflective conversation between Steven Leffue, Kyle Motl and Øyvind Brandtsegg. Done after the crossadaptive feature-modulator takes, touching on some of the problems encountered, but also reflecting on the wider context of different kinds of music accompaniment systems.
Liveconvolver tracks:
Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.
Discussion 1 on playing with the live convolver, with Steven Leffue and Kyle Motl.
Discussion 2 on playing with the live convolver, with Steven Leffue and Kyle Motl.
This is a description of a session with first-year jazz students at NTNU, recorded March 7 and 8. The session was organized as part of the ensemble teaching given to jazz students at NTNU, and was meant to take care of both the learning outcomes of the normal ensemble teaching and aspects related to the cross adaptive project.
Musicians:
Håvard Aufles, Thea Ellingsen Grant, Erlend Vangen Kongstorp, Rino Sivathas, Øyvind Frøberg Mathisen, Jonas Enroth, Phillip Edwards Granly, Malin Dahl Ødegård and Mona Thu Ho Krogstad.
Processing musician:
Trond Engum
Video documentation:
Andreas Bergsland
Sound technician:
Thomas Henriksen
Video digest from the session:
Preparation:
Based on our earlier experiences with bleed between microphones, we located the instruments in separate rooms. Since this was quite a big group of different performers, it was important that changing set-up took as little time as possible. A system set-up was also prepared beforehand, based on the instruments in use. To give the performers an understanding of the project as early in the process as possible, we used the same four-step chronology when introducing them to the set-up:
Start with individual instruments, trying different effects through live processing, and decide together with the performers which effects are most suitable to add to their instrument.
Introduce the analyser and decide, based on input from the performers, which methods are best suited for controlling different effects from their instrument.
Introduce adaptive processing, where one performer is controlling the effects on the other, and then repeat vice versa.
Introduce cross-adaptive processing, where all previous choices and mappings are opened up for both performers.
Session report:
Day 1. Tuesday 7th March
Trumpet and drums
Sound example 1:
(Step 1) Trumpet live processed with two different effects, convolution (impulse response from water) and overdrive.
The performer was satisfied with the chosen effects, also because the two were quite different in sound quality. The overdrive was experienced as nice, but he would not like to have it present all the time. We decided to save these effects for later use on trumpet, and to be aware of dynamic control on the overdrive.
Sound example 2:
(Step 1) Drums live processed with dynamically changing delay and a pitch shift 2 octaves down. The performer found the chosen effects interesting, and the mapping was saved for later use.
Sound example 3:
(Step 1) Before entering the analyser and adaptive processing, we wanted to try playing together with the effects we had chosen, to see if they blended well. The trumpet player had some problems hearing the drums during the performance, feeling they were a bit in the background. We found that the direct sound of the drums was a bit low in the mix, and this was adjusted. We discussed that it is possible to make the direct sound of both instruments louder or softer depending on what the performer wants to achieve.
Sound example 4:
(Step 2/3) For this example we entered into the analyser, using transient density on drums. This was tried out by showing the analyser while doing an accelerando on drums. It was then set up as an adaptive control from drums on the trumpet. For control, the trumpet player suggested that the higher the transient density, the less convolution effect should be added to the trumpet (less send to a convolution effect with a recording of water). The reason was that it could make more sense to have more water on slow ambient parts than on the faster, hectic parts. At the same time, he suggested the opposite should happen when adding overdrive to the trumpet from transient density, meaning that the higher the transient density, the more overdrive on the trumpet. During the first take, a reverb was added to the overdrive in order to blend the sound more into the production. The dynamic control over the effects felt a bit difficult, because the water disappeared too easily and the overdrive was introduced too easily. We agreed to fine-tune the dynamic control before doing the actual test that is presented as sound example 4.
Sound example 5:
For this example we changed roles and enabled the trumpet to control the drums (adaptive processing). We followed a suggestion from the trumpet player and used pitch as an analysis parameter. We decided to use this to control the delay effect on the drums: low notes produced long gaps between delays, whereas high notes produced small gaps between delays. This was maybe not the best solution for getting good dynamic control, but we decided to keep it anyway.
Sound example 6:
Cross adaptive performance using the effects and control mappings introduced in examples 4 and 5. This was a nice experience for the musicians. Even though it still felt a bit difficult to control, it was experienced as musically meaningful. Drummer: "Nice to play a steady groove, and listen to how the trumpet changed the sound of my instrument".
Vocals and piano
Sound example 7:
We had now changed the instrumentation over to vocals and piano, and we started with a performance doing live processing on both instruments. The vocals were processed using two different effects: a delay, and convolution with a recording of small metal parts. The piano was processed using an overdrive and convolution with water.
Sound example 8:
Cross adaptive performance where the piano was analysed by rhythmical consonance, controlling the delay effect on the vocals. The vocal was analysed by transient density, controlling the convolution effect on the piano. Both musicians found this difficult, but musically meaningful. Sometimes the control aspect was experienced as counterintuitive to the musical intention. Pianist: "It felt like there was a 3rd musician present".
Saxophone self-adaptive processing
Sound example 9:
We started with a performance doing live processing to familiarize the performer with the effects. The performer found the augmentation of extended techniques such as clicks and pops interesting, since this magnified "small" sounds.
Sound example 10:
Self-adaptive processing performance where the saxophone was analysed by transient density, used to control two different convolution effects (a recording of metal parts and a recording of a cymbal), the first resulting in a delay-like effect, the second in a reverb. The higher the transient density in the analysis, the more delay and less reverb, and vice versa (this complementary routing is sketched below). The performer experienced the quality of the two effects as quite similar, so we removed the delay effect.
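A hypothetical sketch of this complementary routing: one analysis feature (transient density) fades the two convolution sends in opposite directions. The normalization range is an illustrative assumption.

```python
import numpy as np

def convolution_sends(transient_density, max_density=8.0):
    """Return (delay_send, reverb_send) levels in the range 0..1."""
    t = float(np.clip(transient_density / max_density, 0.0, 1.0))
    delay_send = t           # more transients -> more "delay" convolution
    reverb_send = 1.0 - t    # fewer transients -> more "reverb" convolution
    return delay_send, reverb_send
```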
Sound example 11:
Self-adaptive processing performance using the same set-up, but changing the delay effect to overdrive. The use of overdrive on saxophone did not bring anything new to the table the way it was set up, since the acoustic sound of the instrument can sound similar to the effect when played with strong energy.
Day 2. Wednesday 8th March
Saxophone and piano
Sound example 12:
Performance with saxophone and live processing, familiarizing the performer with the different effects and then choosing which of them to bring further into the session. The performer found this interesting and wanted to continue with the reverb ideas.
Sound example 13:
Performance with piano and live processing. The performer especially liked the last part with the delays. Saxophonist: "It was like listening to the sound under water (convolution with water) sometimes, and sometimes like listening to an old radio (overdrive)". The pianist wanted to keep the effects that were introduced.
Sound example 14:
Adaptive processing, controlling the delay on saxophone from the piano by analysing transient density. The higher the transient density, the larger the gap between delays on the saxophone. The saxophone player found it difficult to interact, since the piano had a clean sound during the performance. The pianist, on the other hand, felt in control of the effect that was added.
Sound example 15:
Adaptive processing using the saxophone to control the piano. We analysed the rhythmical consonance of the saxophone: the higher the degree of consonance, the more convolution effect (water) was added to the piano, and vice versa. The saxophonist didn't feel in control during the performance, and guessed it was due to not holding a steady rhythm over a longer period. The direct sound of the piano was also a bit loud in the mix, making the added effect a bit low in the mix. The pianist felt that the saxophone was in control, but agreed that the analysis was not able to read reliably because of the lack of a steady rhythm over a longer time period.
Sound example 16:
Crossadaptive performance using the same set-up as in examples 14 and 15. Both performers felt in control, and started to explore more of the possibilities. An interesting point arises when the saxophone stops playing, since the rhythmical consonance analysis will make a drop as soon as it starts to read again. This could result in strong musical statements.
Sound example 17:
Crossadaptive performance keeping the same setting, but adding rms analysis on the saxophone to control a delay on the piano (the higher the rms, the less delay, and vice versa).
Vocals and electric guitar
Sound example 18:
Performance with vocals and live processing. Vocalist: "It is fun, but something you need to get used to, needs a lot of time".
Sound example 19:
Performance with guitar and live processing. Guitarist: "Adapted to the effects, my direct sound probably sounds terrible, feel that I'm losing my touch, but it feels complementary and is a nice experience".
Sound example 20:
Performance with adaptive processing, analyzing the guitar using rms and transient density. The higher the transient density, the more delay added to the vocal; the higher the rms, the less reverb added to the vocal. Guitarist: "I feel like a remote controller and it is hard to focus on what I play sometimes". Vocalist: "Feels like a two-dimensional way of playing".
Sound example 21:
Performance with adaptive processing, controlling the guitar from the vocals. The rhythmical consonance of the vocal controls the time gap between delays inserted on the guitar: higher rhythmical consonance results in larger gaps, and vice versa. The transient density of the vocal controls the amount of pitch shift added to the guitar: the higher the transient density, the less volume is sent to the pitch shift.
Sound example 22:
Performance with cross adaptive processing using the same settings as in sound examples 20 and 21.
Vocalist: “It is another way of making music, I think”. Guitarist: “I feel control and I feel my impact, but musical intention really doesn’t fit with what is happening – which is an interesting parameter. Changing so much with doing so little is cool”.
Observations and reflections
The sessions have now come to a point where less time is used on setting up and figuring out how the functionality in the software works, and more time is used on actual testing. This is an important step, considering that we are working with musicians being introduced to the concept for the first time. Good stability in the software and separation between microphones make the workflow much more effective. It still took some time to set up everything the first day, due to two system crashes: the first related to the MIDIator, the second related to video streaming.
Since the system was prepared beforehand, there was a lot of reuse, both concerning analysis methods and the choice of effects. Even though there was a lot of reuse on the technical side, the performances and results show a large variety of expression. Even though this is not surprising, we think it is an important aspect to be reminded of during the project.
Another technical workaround that was discussed, concerning the analysis stage, was the possibility of operating with two different microphones on the same instrument. The idea is to use one for the analysis, and one for capturing the "total" sound of the instrument for use in the processing. This will of course depend on which analysis parameters are in use, but will surely help give a more reliable reading in some situations, both concerning bleed and for closer focus on the desired attributes.
The pedagogical approach using the four-step introduction was experienced as fruitful when introducing the concept to musicians for the first time. This helped the understanding during the process and therefore resulted in more fruitful discussions and reflections between the performers during the session. Starting with live processing demonstrates the possibilities and the flexible control over different effects early in the process, and gives the performers a chance to take part in deciding aesthetics and building a framework before entering the control aspect.
Quotes from the performers:
Guitarist: “Totally different experience”. “Felt best when I just let go, but that is the hardest part”. “It feels like I’m a midi controller”. “… Hard to focus on what I’m playing”. “Would like to try out more extreme mappings”
Vocalist: “The product is so different because small things can do dramatic changes”. “Musical intention crashes with control”. “It feels like a 2-dimensional way of playing”