Modulator mapping – Cross adaptive processing as musical intervention
http://crossadaptive.hf.ntnu.no
Exploring radically new modes of musical interaction in live performance

Session with Kim Henry Ortveit
Thu, 22 Feb 2018

Kim Henry is currently a master's student at NTNU music technology, and as part of his master's project he has designed a new hybrid instrument. The instrument allows close interaction between what is played on a keyboard (or rather a Seaboard) and a set of drum pads. Modulations and patterns played on one part of the instrument determine how other components of the instrument actually sound. This is combined with some intricate layering, looping and quantization techniques that allow shaping of the musical continuum in novel ways. Since the instrument is in itself crossadaptive between its constituent parts (and simply also because we think it sounds great), we wanted to experiment with it within the crossadaptive project too.


Kim Henry’s instrument

The session was done as an interplay between Kim Henry on his instrument and Øyvind Brandtsegg on vocals and crossadaptive processing. It was conducted as an initial exploration of just “seeing what would happen” and trying out various ways of making crossadaptive connections here and there. The two audio examples here are excerpts of longer takes.

Take 1, Kim Henry Ortveit and Øyvind Brandtsegg

 

2018_02_Kim_take2

Take 2, Kim Henry Ortveit and Øyvind Brandtsegg

 

Session with David Moss in Berlin
Fri, 02 Feb 2018

Thursday February 1st, we had an enjoyable session at the Universität der Künste in Berlin. It took place at the Grunewaldstraße campus and was generously hosted by professor Alberto De Campo. This was a nice opportunity to follow up on earlier collaboration with David Moss, as we have learned so much about performance, improvisation and music in general from him on earlier occasions.
Earlier the same week I had presented the crossadaptive project to prof. De Campo's students of computational art and performance with complex systems. This environment of arts and media studies at UdK was particularly conducive to our research, and we had some very interesting discussions.

David Moss – vocals
Øyvind Brandtsegg – crossadaptive processing, vocals
Alberto De Campo – observer
Marija Mickevica – observer, and vocals on one take

 

More details on these tracks will follow; for now I am just uploading them here so that the involved parties can get access.

DavidMoss_Take0

Initial exploration, the (becoming) classic reverb+delay crossadaptive situation

DavidMoss_Onefx1

Test session, exploring one effect only

DavidMoss_Onefx2

Test session, exploring one effect only  (2)

DavidMoss_Take1

First take

DavidMoss_Take2

Second take

DavidMoss_Take3

Third take

Then we did some explorations of David telling stories, live convolving with Øyvind’s impulse responses.

DavidMoss_Story0

Story 1

DavidMoss_Story1

Story 2

And we were lucky that student Marija Mickevica wanted to try recording live impulse responses while David was telling stories. Here’s an example:

DavidMoss_StoryMarija

Story with Marija’s impulse responses

And a final take with David and Øyvind, where all previously tested effects and crossadaptive mappings were enabled. Selective mix of effects and modulations was controlled manually by Øyvind during the take.

DavidMoss_LastTake

Final combined take

 

Crossadaptive seminar Trondheim, November 2017
Sun, 05 Nov 2017

As part of the ongoing research on crossadaptive processing and performance, we had a very productive seminar in Trondheim on 2 and 3 November 2017. The current post shows the program of presentations, performances and discussions, and provides links to more detailed documentation of each session as it becomes available. Each part will be added as the documentation is ready, so if something is missing now, do check back later. The seminar was also streamed, and the recorded streams will be archived.

In addition to the researchers presenting, we also had an audience of students from the music technology and the jazz departments, as well as other researchers and teachers from NTNU. We are grateful for the input from the audience to enrich our discussions.

Program:

Thursday 2. November

Practical experiments

Introduction and status. [slides]
Øyvind Brandtsegg

 

Performance
Maja S.K. Ratkje, Øyvind Brandtsegg, Miller Puckette (stand-in for Stian Westerhus)

 

Work methods and session reports. Experiences, insights, reflections.
Trond Engum (with team)[slides],  Bernt Isak Wærstad (with team) [slides]

Instruments and tools

Instrumental affordances, crossadaptivity as instrumental gesture.
Marije Baalman [slides]

 


Performance
Tone Åse, Carl Haakon Waadeland, Trond Engum

 

Instrument design and technological developments. [slides]
Sigurd Saue, Victor Lazzarini, Øyvind Brandtsegg

 

Friday 3. November

Reflection. Aesthetic and philosophical issues

Documentation methods [slides]
Andreas Bergsland


Performance
Bjørnar Habbestad, Gyrid N. Kaldestad, Bernt Isak Wærstad

 

What does it mean for the performer, for the audience, for the music? How does it change the game?
Solveig Bøe [notes], Simon Emmerson [slides]

Wider use and perspectives

Experiences with Philippe Manoury and Juliana Snapper, thoughts on instrumental control, and a performance
Miller Puckette [PD patches]
(with Øyvind Brandtsegg for a brief liveconvolver performance)

 

Looking at the music from the mix perspective. Viability of effects as expression. The wider field of automatic mixing and adaptive effects.
Gary Bromham [slides], Josh Reiss [slides]

 

Outcomes and evaluation. [slides]
Moderator: Øyvind Brandtsegg

Session with 4 singers, Trondheim, August 2017
Mon, 09 Oct 2017

Location: NTNU, Studio Olavshallen.

Date: August 28 2017

Participants:
Sissel Vera Pettersen, vocals
Ingrid Lode, vocals
Heidi Skjerve, vocals
Tone Åse, vocals

Øyvind Brandtsegg, processing
Andreas Bergsland, observer and video documentation
Thomas Henriksen, sound engineer
Rune Hoemsnes, sound engineer

We also had the NTNU documentation team (Martin Kristoffersen and Ola Røed) making a separate video recording of the session.

Session objective and focus:
We wanted to try out crossadaptive processing with similar instruments. Until this session, we had usually used it on a combination of two different instruments, leading to very different analysis conditions. The analysis methods respond a bit differently to each instrument type, and each instrument also "triggers" the processing in a particular manner. It was thought interesting to try some experiments under more "even" conditions. Using four singers and combining them in different duo configurations, we also saw the potential for gleaning personal expressive differences and approaches to the crossadaptive performance situation. This also allowed them to switch roles, i.e. performing under the processing condition where they previously had the modulating role. No attempt was made to exhaustively try every possible combination of roles and effects; we just wanted to try a variety of scenarios possible with the current resources. The situation proved interesting in so many ways, and further exploration of it would be necessary to probe the research potential herein.
In addition to the analyzer-modulator variant of crossadaptive processing, we also did several takes of live convolution and streaming convolution. This session was the very first performative exploration of streaming convolution.

We used a reverb (Valhalla) on one of the signals, and a custom granular reverb (partikkelverb) on the other. The crossadaptive mappings were first designed so that each of the signals could have a "prolongation" effect (larger size for the reverb, more time smearing for the granular effect). However, after the first take, it seemed that the time smearing of the granular effect was not so clearly perceived as a musical gesture. We then replaced the time smearing parameter of the granular effect with a "graininess" parameter (controlling grain duration). This setting was used for the remaining takes. We used transient density combined with amplitude to control the reverb size, where louder and faster singing would make the reverb shorter (smaller). We used dynamic range to control the time smearing parameter of the granular effect, and used transient density to control the grain size (faster singing makes the grains shorter).
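
As an illustration of this kind of mapping, the sketch below combines two normalized analysis features (amplitude and transient density) and inverts the result into a reverb-size value, so that louder and faster singing gives a smaller room. It is only a minimal sketch: the averaging of the two features, the parameter ranges and the function names are assumptions made for the example, not the actual MIDIator configuration used in the session.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a feature value into a parameter range, clipping to the input range."""
    v = max(in_lo, min(in_hi, value))
    norm = (v - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

def reverb_size(amplitude, transient_density):
    """Louder and faster singing (higher activity) gives a smaller reverb size."""
    activity = 0.5 * (amplitude + transient_density)   # simple average of the two features
    return scale(activity, 0.0, 1.0, 0.9, 0.2)         # inverted range: more activity -> smaller size

print(reverb_size(amplitude=0.8, transient_density=0.7))   # active singing -> small room
print(reverb_size(amplitude=0.1, transient_density=0.2))   # quiet, slow singing -> large room
```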

Video digest of the session

Crossadaptive analyzer-modulator takes

ed_XA 01

Crossadaptive take 1: Heidi/Ingrid
Heidi has a reverb controlled by Ingrid's amplitude and transient density
– louder and faster singing makes the reverb shorter
Ingrid has a time smearing effect.
– time is more slowed down when Heidi uses a larger dynamic range

 

ed_XA 02

Crossadaptive take 2: Heidi/Sissel
Heidi has a reverb controlled by Sissel's amplitude and transient density
– louder and faster singing makes the reverb shorter
Sissel has a granular effect.
– the effect is more grainy (shorter grain duration) when Heidi plays with a higher transient density (faster)

 

ed_XA 03

Crossadaptive take 3: Sissel/Tone
Sissel has a reverb controlled by Tone's amplitude and transient density
– louder and faster singing makes the reverb shorter
Tone has a granular effect.
– the effect is more grainy (shorter grain duration) when Sissel plays with a higher transient density (faster)

 

ed_XA 04

Crossadaptive take 4: Tone/Ingrid
Ingrid has a reverb controlled by Tone's amplitude and transient density
– louder and faster singing makes the reverb shorter
Tone has a granular effect.
– the effect is more grainy (shorter grain duration) when Ingrid plays with a higher transient density (faster)

 

ed_XA 05

Crossadaptive take 5: Tone/Ingrid
Same settings as for take 4

Convolution

Doing live convolution with two singers was thought interesting for the same reasons as listed in the introduction, creating a controlled scenario with two similarly-featured signals. As the voice is in itself one of the richest instruments in terms of signal variation, it was also interesting to explore convolution with these instruments. We used the now familiar live convolution techniques, where one of the performers records an impulse response and the other plays through it. In addition, we explored streaming convolution, developed by Victor Lazzarini as part of this project. In streaming convolution, the two signals are treated even more equally than is the case in live convolution. Streaming convolution simply convolves two circular buffers of a predetermined length, allowing both signals to have the exact same role in relation to the other. It also has a "freeze mode", where updating of the buffer is suspended, allowing one or the other (or both) of the signals to be kept stationary as a filter for the other. This freezing was controlled by a physical pedal, in the same manner as we use a pedal to control IR sampling with live convolution. In some of the videos one can see the singers raising their hand, as a signal to the other that they are now freezing their filter. When the signal is not frozen (i.e. streaming), there is a practically indeterminate latency in the process as seen from the performer's perspective. This stems from the fact that the input stream is segmented with respect to the filter length. Any feature recorded into the filter will have a position in the filter dependent on when it was recorded, and the perceived latency between an input impulse and the convolver output of course relies on where in the "impulse response" the most significant energy or transient can be found. The technical latency of the filter is still very low, but the perceived latency depends on the material.
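
To make the description above more concrete, here is a naive, offline sketch of the idea: both performers fill circular buffers of the same predetermined length, a freeze flag suspends the update of either buffer, and the output is derived from the convolution of the two buffers. The buffer sizes, the hop-wise overlap-add and the level compensation are assumptions made for the illustration; the actual streaming convolution implementation by Lazzarini uses efficient low-latency partitioned convolution and differs in detail.

```python
import numpy as np

N, HOP = 4096, 1024                  # buffer length and update hop (made-up sizes)
buf_a = np.zeros(N)                  # circular buffer holding the last N samples of performer A
buf_b = np.zeros(N)                  # circular buffer holding the last N samples of performer B
freeze_a = freeze_b = False          # pedal-controlled freeze flags

def update(buf, new_hop, frozen):
    """Shift the buffer and append the newest hop of input, unless the buffer is frozen."""
    if frozen:
        return buf
    shifted = np.roll(buf, -len(new_hop))
    shifted[-len(new_hop):] = new_hop
    return shifted

def process_hop(hop_a, hop_b, out, pos):
    """Update both buffers, then overlap-add the convolution of the two buffers into 'out' at 'pos'."""
    global buf_a, buf_b
    buf_a = update(buf_a, hop_a, freeze_a)
    buf_b = update(buf_b, hop_b, freeze_b)
    segment = np.convolve(buf_a, buf_b) * (HOP / N)   # rough level compensation
    out[pos:pos + len(segment)] += segment            # caller allocates 'out' with enough headroom
```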

ed_LC 01

Liveconvolver take 1: Tone/Sissel
Tone records the IR

 

ed_LC 02

Liveconvolver take 2: Tone/Sissel
Sissel records the IR

 

ed_LC 03

Liveconvolver take 3: Heidi/Sissel
Sissel records the IR

 

ed_LC 04

Liveconvolver take 4: Heidi/Sissel
Heidi records the IR

 

ed_LC 05

Liveconvolver take 5: Heidi/Ingrid
Heidi records the IR

Streaming Convolution

These are the very first performative explorations of the streaming convolution technique.

ed_TVC

Streaming convolution take 1: Heidi/Sissel

 

ed_TVC2

Streaming convolution take 2: Heidi/Tone

 

 

Session in UCSD Studio A
Fri, 08 Sep 2017

This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all these performers in different constellations, some new combinations were tested this day. The approach was to explore fairly complex feature-modulator mappings. No particular focus was put on intellectualizing the details of these mappings, but rather on experiencing them as a whole, "as instrument". I had found that simple mappings, although easy to decode and understand for both performer and listener, would quickly "wear out" and become flat, boring or plainly limiting for musical development during the piece. I attempted to create some "rich" mappings, with combinations of different levels of subtlety: some clearly audible and some subtle timbral effects. The mappings were designed with some specific musical gestures and interactions in mind, and these are listed together with the mapping details for each constellation later in this post.

During this session, we also explored the live convolver in terms of how the audio content in the IR affects the resulting creative options and performative environment for the musician playing through the effect. The liveconvolver takes are presented interspersed with the crossadaptive “feature-modulator” (one could say “proper crossadaptive”) takes. Recording of the impulse response for the convolution was triggered via an external pedal controller during performance, and we let each musician in turn have the role of IR recorder.
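
A rough sketch of that workflow is given below: audio is written into an impulse response buffer while a pedal flag is held, and the other performer's signal is then convolved through the captured IR. This is a simplified offline illustration (the buffer length, function names and the use of scipy.signal.fftconvolve are assumptions), not the low-latency convolver plugin actually used in the project.

```python
import numpy as np
from scipy.signal import fftconvolve

SR = 44100
ir_buffer = np.zeros(2 * SR)          # two seconds of impulse-response memory (arbitrary length)
write_pos = 0

def record_ir(block, pedal_down):
    """While the pedal is held, write incoming audio blocks into the IR buffer."""
    global write_pos
    if pedal_down and write_pos + len(block) <= len(ir_buffer):
        ir_buffer[write_pos:write_pos + len(block)] = block
        write_pos += len(block)

def play_through(dry_signal):
    """Convolve the other performer's signal through the captured IR (offline, not low-latency)."""
    return fftconvolve(dry_signal, ir_buffer[:max(write_pos, 1)])
```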

Participants:
Jordan Morton: double bass and voice
Miller Puckette: guitar
Steven Leffue: sax
Kyle Motl: double bass
Oeyvind Brandtsegg: crossadaptive mapping design, processing
Andrew Munsie: recording engineer

The music played was mostly free improvisations, but two of the takes with Jordan Morton were performances of her compositions. These were composed in dialogue with the system, during and in between earlier sessions. She both plays the bass and sings, and wanted to explore how phrasing and shaping of precomposed material could be used to expressively control the timbral modulations of the effects processing.

Jordan Morton: bass and voice.

These pieces are composed by Jordan, and she has composed them with the intention of being performed freely, shaped according to the situation at performance time, allowing the crossadaptive modulations ample room for influence on the sound.

Jordan Morton
I confess

“I confess” (Jordan Morton). Bass and voice.

 

Jordan Morton
Backbeat thing

“Backbeat thing” (Jordan Morton). Bass and voice.

 

The effects used:
Effects on vocals: Delay, Resonant distorted lowpass
Effects on bass: Reverb, Granular tremolo

The features and the modulator mappings:
(also stating an intended purpose for each mapping)

  • Bass spectral flatness, and
  • Bass spectral flux: both features giving lesser reverb time on bass

Purpose: When the bass becomes more noisy, it will get less reverb

  • Vocal envelope dynamics (dynamic range), and
  • Vocal transient density: both features giving lower lowpass filter cutoff frequency on reverb on bass

Purpose: When the vocal becomes more active, the bass reverb will be less pronounced

  • Bass transient density: higher cutoff frequency (resonant distorted lowpass filter) on vocal

Purpose: to animate a distorted lo-fi effect on the vocals, according to the activity level on bass

  • Vocal mfcc-diff (formant strength, “pressed-ness”): Send level for granular tremolo on bass

Purpose: add animation and drama to the bass when the vocal becomes more energetic

  • Bass transient density: lower lowpass filter frequency for the delay on vocal

Purpose: clean up vocal delays when the bass becomes more active

  • Vocal transient density: shorter delay time for the delay on vocal
  • Bass spectral flux: longer delay time for the delay on vocal

Purpose: just for animation/variation

  • Vocal dynamic range, and
  • Vocal transient density: both features giving less feedback for the delay on vocal

Purpose: clean up vocal delay for better articulation on text

 

Liveconvolver tracks Jordan/Jordan:

The tracks are improvisations. Here, Jordan’s voice was recorded as the impulse response and she played bass through the voice IR. Since she plays both instruments, this provides a unique approach to the live convolution performance situation.

Liveconvolver bass/voice

Liveconvolver take 1: Jordan Morton bass and voice

 

Liveconvolver bass/voice 2

Liveconvolver take 2: Jordan Morton bass and voice

 

Jordan Morton and Miller Puckette

Liveconvolver tracks Jordan/Miller:

These tracks were improvised by Jordan Morton (bass) and Miller Puckette (guitar). Each of the musicians was given the role of "impulse response recorder" in turn, while the other then played through the convolver effect.

20170511-Brandtsegg-Tk-12-Edit-A-Mix-V1

Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Miller records the IR.

20170511-Brandtsegg-Tk-14-Edit-A-Mix-V1

Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Jordan records the IR.

 

Discussion on the performance with live convolution, with Jordan Morton and  Miller Puckette.

Miller Puckette and Steven Leffue

These tracks were improvised by Miller Puckette (guitar) and Steven Leffue (sax). The feature-modulator mapping was designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping before the first take. The intention of this strategy was to create a naturally flowing environment of exploration, with not-too-obvious relationships between instrumental gestures and resulting modulations. After the first take, selected elements of the mapping (one for each musician) were explained again in more detail to the performers, with the anticipation that these features might then be explored more consciously.

Take 1:

20170511-Brandtsegg-Tk-18-Edit-A-Mix-V1b

Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 1. Details of the feature-modulator mapping are given below.

Discussion 1 on the crossadaptive performance, with Miller Puckette and Steven Leffue. On the relationship between what you play and how that modulates the effects, on balance of monitoring, and other issues.

The effects used:
Effects on guitar: Spectral delay
Effects on sax: Resonant distorted lowpass, Spectral shift, Reverb

The features and the modulator mappings:
(also stating an intended purpose for each mapping)

  • Guitar envelope crest: longer reverb time on sax

Purpose: dynamic guitar playing will make a big room for the sax

  • Guitar transient density: higher cutoff frequency for reverb highpass filter and lower cutoff frequency for reverb lowpass filter

Purpose: when guitar is more active, the reverb on sax will be less full (less highs and less lows)

  • Guitar transient density (again): downward spectral shift on sax

Purpose: animation and variation

  • Guitar spectral flux: higher cutoff frequency (resonant distorted lowpass filter) on sax

Purpose: just for animation and variation. Note that spectral flux (especially on the guitar) will also give high values on single notes in the low register (the lowest octave), in addition to the expected behaviour of giving higher values on more noisy sounds.

  • Sax envelope crest: less delay send on guitar

Purpose: more dynamic sax playing will "dry up" the guitar delays; long notes must be played to open the send of guitar to delay

  • Sax transient density: longer delay time on guitar. This modulation mapping was also gated by the rms amplitude of the sax (so that it is only active when sax gets loud)

Purpose: loud and fast sax will give more distinct repetitions (further apart) on the guitar delay (see the gating sketch after this list)

  • Sax pitch: increase spectral delay shaping of the guitar (spectral delay with different delay times for each spectral band)

Purpose: more unnatural (crazier) effect on guitar when sax goes high

  • Sax spectral flux: more feedback on guitar delay

Purpose: noisy sax playing will give more distinct repetitions (more repetitions) on the guitar delay
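
The rms gating mentioned in the list above (a modulation that is only active when the source instrument gets loud) could look something like this minimal sketch; the threshold, ranges and function name are illustrative assumptions, not the values used in the session.

```python
def gated_modulator(feature_value, rms_db, threshold_db=-30.0, lo=0.0, hi=1.0):
    """Scale a normalized feature into a modulator value, but only when the gate signal is loud enough.

    Below the rms threshold the modulator falls back to 'lo' (no modulation).
    """
    if rms_db < threshold_db:
        return lo
    value = max(0.0, min(1.0, feature_value))   # clip the normalized feature to 0..1
    return lo + value * (hi - lo)

# sax transient density mapped to delay time, active only when the sax is loud
print(gated_modulator(0.7, rms_db=-12.0))   # gate open -> scaled value
print(gated_modulator(0.7, rms_db=-45.0))   # gate closed -> lo
```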

Take 2:

20170511-Brandtsegg-Tk-21-Edit-A-Mix-V1

Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 2. The feature-modulator mapping was the same as for take 1.

Discussion 2 on the crossadaptive performance, with Miller Puckette and Steven Leffue: instructions and intellectualizing the mapping made it harder.

Liveconvolver tracks:

Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.

20170511-Brandtsegg-Tk-22-Edit-A-Mix-V1

Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Miller records the IR.

Discussion 1 on playing with the live convolver, with Miller Puckette and Steven Leffue.

20170511-Brandtsegg-Tk-23-Edit-A-Mix-V1

Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Steven records the IR.

Discussion 2 on playing with the live convolver, with Miller Puckette and Steven Leffue.

 

Steven Leffue and Kyle Motl

Two different feature-modulator mappings were used, and we present one take of each mapping. Like the mappings used for Miller/Steven, these were designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping. The mapping used for the first take closely resembles the mapping for Steven/Miller, with just a few changes to accommodate the different musical context and how the analysis methods respond to the instruments.

  • Bass transient density: shorter reverb time on sax
  • The reverb equalization (highpass and lowpass) was skipped
  • Bass envelope crest: increase send level for granular processing on sax
  • Bass rms amplitude: Parametric morph between granular tremolo and granular time stretch on sax

 

Crossadaptive take 1 Steven / Kyle

 

In the first crossadaptive take in this duo, Kyle commented that the amount of delay made it hard to play, and that any fast phrases would just turn into mush. It seemed the choice of effects and modulations was not optimal, so we tried another configuration of effects (and thus another mapping of features to modulators).

 

Crossadaptive take 2 Steven / Kyle

 

This mapping had earlier been used for duo playing between Kyle (bass) and Øyvind (vocal) on several occasions, and it was merely adjusted to accommodate the different timbral dynamics of the saxophone. In this way, Kyle was familiar with the possibilities of the mapping, but not with the context in which it would be used.
The granular processing done on both instruments was done with the Hadron Particle Synthesizer, which allows multidimensional parameter navigation through a relatively simple modulation interface (X, Y and 4 expression controllers). The specifics of the actual modulation routing and mapping within Hadron could be described, but it was thought that in the context of the current report, further technical detail would only take away from the clarity of the presentation. Even though the details of the parameter mapping were designed deliberately, at this point in the performative approach to playing with it, we no longer paid attention to technical specifics. Rather, the focus was on letting go and trying to experience the timbral changes rather than intellectualizing them.
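
To give a feel for what such an interface does, the sketch below shows one common way a two-dimensional state morph can be realized: bilinear interpolation between four parameter snapshots placed at the corners of the XY plane. The corner values and parameter names are made up for the illustration; Hadron's actual state interpolation and expression controllers are more elaborate.

```python
import numpy as np

# Four hypothetical parameter snapshots (corner states), one per corner of the XY pad.
# The parameter vector could stand for e.g. grain size, pitch spread and grain density.
corners = {
    (0, 0): np.array([0.1, 0.2, 0.5]),
    (1, 0): np.array([0.8, 0.1, 0.2]),
    (0, 1): np.array([0.3, 0.9, 0.7]),
    (1, 1): np.array([0.6, 0.5, 0.9]),
}

def morph(x, y):
    """Bilinear interpolation of the parameter vectors from the four corner states."""
    return ((1 - x) * (1 - y) * corners[(0, 0)] +
            x * (1 - y) * corners[(1, 0)] +
            (1 - x) * y * corners[(0, 1)] +
            x * y * corners[(1, 1)])

# A feature like sax spectral flux (normalized 0..1) could drive the X axis:
print(morph(0.25, 0.7))
```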

The effects used:
Effects on sax: Delay, granular processing
Effects on bass: Reverb, granular processing

The features and the modulator mappings:
(also stating an intended purpose for each mapping)

  • Sax envelope crest: shorter reverb time on bass
  • Sax rms amp: higher cutoff frequency for reverb highpass filter

Purpose: louder sax will make the bass reverb thinner

  • Sax transient density: lower cutoff frequency for reverb lowpass filter
  • Sax envelope dynamics (dynamic range): higher cutoff frequency for reverb lowpass filter

Purpose: faster sax playing will make the reverb less prominent, but more dynamic playing will enhance it

  • Sax spectral flux: Granular processing state morph (Hadron X-axis) on bass
  • Sax envelope dynamics: Granular processing state morph (Hadron Y-axis) on bass
  • Sax rms amplitude: Granular processing state morph (Hadron Y-axis) on bass

Purpose: animation and variation

  • Bass spectral flatness: higher cutoff frequency of the delay feedback path on sax
    Purpose: more noisy bass playing will enhance delayed repetitions
  • Bass envelope dynamics: less delay feedback on sax
    Purpose: more dynamic playing will give less repetitions in delay on sax
  • Bass pitch: upward spectral shift on sax

Purpose: animation and variation, pulling in same direction (up pitch equals shift up)

  • Bass transient density: Granular process expression 1 (Hadron) on sax
  • Bass rms amplitude: Granular process expression 2 & 3 (Hadron) on sax
  • Bass rhythmic irregularity: Granular process expression 4 (Hadron) on sax
  • Bass MFCC diff: Granular processing state morph (Hadron X-axis) on sax
  • Bass envelope crest: Granular processing state morph (Hadron Y-axis) on sax

Purpose: multidimensional and rich animation and variation


On the second crossadaptive take between Steven and Kyle, I asked: "Does this hinder interaction, or does it make something interesting happen?"
Kyle says it hinders the way they would normally play together. “We can’t go to our normal thing because there’s a third party, the mediation in between us. It is another thing to consider.” Also, the balance between the acoustic sound and the processing is difficult. This is even more difficult when playing with headphones, as the dynamic range and response is different. Sometimes the processing will seem very quiet in relation to the acoustic sound of the instruments, and at other times it will be too loud.
Steven says at one point he started not paying attention to the processing and focused mostly on what Kyle was doing. “Just letting the processing be the reaction to that, not treating it as an equal third party. … Totally paying attention to what the other musician is doing and just keeping up with him, not listening to myself.” This also mirrors the usual options of improvisational listening strategy and focus, of listening to the whole or focusing on specific elements in the resulting sound image.

Longer reflective conversation between Steven Leffue, Kyle Motl and Øyvind Brandtsegg, done after the crossadaptive feature-modulator takes, touching on some of the problems encountered, but also reflecting on the wider context of different kinds of music accompaniment systems.

Liveconvolver tracks:

Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.

 

Liveconvolver take 1 Steven / Kyle

 

Liveconvolver take 2 Steven / Kyle

 

Discussion 1 on playing with the live convolver, with Steven Leffue and Kyle Motl.

Discussion 2 on playing with the live convolver, with Steven Leffue and Kyle Motl.

Second session at Norwegian Academy of Music (Oslo) – January 13. and 19., 2017
Wed, 07 Jun 2017

Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice)

The focus for this session was to play with, fine-tune and work further on the mappings we set up during the last session at NMH in November. For practical reasons, we had to split the session into two half days, on the 13th and 19th of January.

13th of January 2017

We started by analysing 4 different musical gestures for the guitar, something that had been skipped due to time constraints during the last session. During this analysis we found the need to specify the spread of the analysis results in addition to the region. This way we could differentiate the analysis results in terms of stability and conclusiveness. We decided to analyse the flute and vocal again to add the new parameters.
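
One simple way to compute such a summary from a recorded feature trace is sketched below: a region describing where the values of a gesture lie, and a spread value indicating how stable, and thus how conclusive, the reading is. The choice of statistics and the example numbers are assumptions for illustration, not the actual output format of the analyser tool.

```python
import numpy as np

def summarize_feature(values):
    """Summarize an analysis trace for one musical gesture.

    'region' describes where the values live (median and interquartile range),
    'spread' indicates how stable/conclusive the reading is (standard deviation).
    """
    values = np.asarray(values, dtype=float)
    return {
        "median": float(np.median(values)),
        "region": (float(np.percentile(values, 25)), float(np.percentile(values, 75))),
        "spread": float(np.std(values)),
    }

# e.g. a recorded spectral-flux trace for one guitar gesture (made-up numbers)
trace = [0.12, 0.15, 0.11, 0.4, 0.13, 0.14, 0.12]
print(summarize_feature(trace))
```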

19th of January 2017

After the analysis was done, we started working on a mapping scheme which involved all 3 instruments, so that we could play in a trio setup. The mappings between flute and vocal were the same as in the November session.

The analyser was still run in Reaper, but all routing, the effects chain and the mapping (MIDIator) were now done in Live. Software instability (the old Reaper projects from November wouldn't open) and the change of DAW from Reaper to Live meant that we had to set up and tune everything from scratch.

Sound examples with comments and immediate reflections

1. Guitar & Vocal – First duo test, not ideal, forgot to mute analyser.

2. Guitar & Vocal retake – Listened back on speakers after recording. Nice sounding. Promising.

Reflection: There seem to be some elements missing, in a good way, meaning that there is space left for things to happen in the trio format. There is still a need for fine-tuning of the relationship between guitar and vocal. This scenario stems from the mapping being done mainly with the trio format in mind.

3. Vocals & flute – Listened back on speakers after recording.

Reflections: dynamic soundscape, quite diverse results, some of the same situations as with take 2, the sounds feel complementary to something else. Effect tuning: more subtle ring mod (good!) compared to last session, the filter on vocals is a bit too heavy-handed. Should we flip the vocal filter? This could prevent filtering and reverb taking place simultaneously. Concern: is the guitar/vocal relationship weaker compared to vocal/flute? Another idea comes up – should we look at connecting gates or bypasses in order to create dynamic transitions between dry and processed signals?

4. Flute & Guitar

Reflections: both the flute ring mod and the guitar delay are a bit on the heavy side, not responsive enough. It is interesting how the effect transformations affect material choices when improvising.

5. Trio

Comments and reflections after the recording session

It is interesting to be in a situation where you, as you play, have multi-layered focuses – playing, listening, thinking of how you affect the processing of your fellow musicians and how your own sound is affected, and trying to make something worth listening to. Of course we are now in an "etude mode", but still striving for the goal: great output!

There seems to be a bug in the analyser tool when it comes to consistency: sometimes some parameters fall out. We found that it is a good idea to run the analysis a couple of times for each sound to get the most precise result.

Cross adaptive session with 1st year jazz students, NTNU, March 7-8
Thu, 06 Apr 2017

This is a description of a session with first-year jazz students at NTNU, recorded March 7 and 8. The session was organized as part of the ensemble teaching given to jazz students at NTNU, and was meant to cover both the learning outcomes of the normal ensemble teaching and aspects related to the cross adaptive project.

Musicians:

Håvard Aufles, Thea Ellingsen Grant, Erlend Vangen Kongstorp, Rino Sivathas, Øyvind Frøberg Mathisen, Jonas Enroth, Phillip Edwards Granly, Malin Dahl Ødegård and Mona Thu Ho Krogstad.

Processing musician:

Trond Engum

Video documentation:

Andreas Bergsland

Sound technician:
Thomas Henriksen

 

Video digest from the session:

Preparation:

Based on our earlier experiences with bleed between microphones, we located the instruments in separate rooms. Since there was quite a big group of different performers, it was important that changing the set-up took as little time as possible. A system set-up was also prepared beforehand, based on the instruments in use. To give the performers an understanding of the project as early in the process as possible, we used the same four-step chronology when introducing them to the set-up.

  1. Start with individual instruments, trying different effects through live processing, and decide together with the performers which effects are most suitable to add to their instrument.
  2. Introduce the analyser and decide, based on input from the performers, which analysis methods are best suited for controlling different effects from their instrument.
  3. Introduce adaptive processing, where one performer controls the effects on the other, and then repeat with the roles reversed.
  4. Introduce cross-adaptive processing, where all previous choices and mappings are opened up for both performers.

 

Session report:

Day 1. Tuesday 7th March

Trumpet and drums

Sound example 1: (Step 1) Trumpet live processed with two different effects, convolution (impulse response from water) and overdrive.

 

The performer was satisfied with the chosen effects, also because the two were quite different in sound quality. The overdrive was experienced as nice, but he would not like to have it present all the time. We decided to save these effects for later use on trumpet, and be aware of dynamic control on the overdrive.

 

Sound example 2: (Step 1) Drums live processed with dynamically changing delay and a pitch shift 2 octaves down. The performer found the chosen effects interesting, and the mapping was saved for later use.

 

Sound example 3: (Step 1) Before entering the analyser and adaptive processing, we wanted to try playing together with the effects we had chosen, to see if they blended well together. The trumpet player had some problems hearing the drums during the performance, and felt they were a bit in the background. We found that the direct sound of the drums was a bit low in the mix, and this was adjusted. We discussed that it is possible to make the direct sound of both instruments louder or softer depending on what the performer wants to achieve.

 

Sound example 4: (Step 2/3) For this example we brought in the analyser, using transient density on drums. This was tried out by showing the analyser while doing an accelerando on the drums. It was then set up as an adaptive control from drums on the trumpet. For control, the trumpet player suggested that the higher the transient density, the less convolution effect should be added to the trumpet (less send to a convolution effect with a recording of water). The reason for this was that it could make more sense to have more water on slow ambient parts than on the faster, hectic parts. At the same time he suggested that the opposite should happen for the overdrive on the trumpet, meaning that the higher the transient density, the more overdrive on the trumpet. During the first take a reverb was added to the overdrive in order to blend the sound more into the production. The dynamic control over the effects felt a bit difficult because the water disappeared too easily, and the overdrive was introduced too easily. We agreed to fine-tune the dynamic control before doing the actual test that is presented as sound example 4.
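
For readers unfamiliar with the feature, the sketch below shows one crude way of estimating transient density: counting upward envelope crossings of a threshold, per second. It is only a stand-in for illustration; the analyser used in the project extracts this feature differently.

```python
import numpy as np

def transient_density(block, sr, threshold=0.2):
    """Rough transient-density estimate: count upward threshold crossings of the envelope, per second."""
    env = np.abs(block)
    above = env > threshold
    onsets = np.count_nonzero(above[1:] & ~above[:-1])   # rising edges only
    return onsets / (len(block) / sr)

sr = 44100
t = np.arange(sr) / sr
clicks = (np.sin(2 * np.pi * 8 * t) > 0.999).astype(float)   # roughly 8 sparse clicks in one second
print(transient_density(clicks, sr))                          # ~8 transients per second
```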

 

Sound example 5: For this example we changed roles and enabled the trumpet to control the drums (adaptive processing). We followed a suggestion from the trumpet player and used pitch as an analysis parameter. We decided to use this to control the delay effect on the drums. Low notes produced long gaps between delays, whereas high notes produced small gaps between delays. This was maybe not the best solution for getting good dynamic control, but we decided to keep it anyway.

 

Sound example 6: Cross adaptive performance using the effects and control mappings introduced in examples 4 and 5. This was a nice experience for the musicians. Even though it still felt a bit difficult to control, it was experienced as musically meaningful. Drummer: "Nice to play a steady groove, and listen to how the trumpet changed the sound of my instrument".

 

Vocals and piano

Sound example 7: We had now changed the instrumentation over to vocals and piano, and we started with a performance doing live processing on both instruments. The vocals were processed using two different effects: a delay, and convolution with a recording of small metal parts. The piano was processed using an overdrive and convolution with water.

 

Sound example 8: Cross adaptive performance where the piano was analysed by rhythmical consonance, controlling the delay effect on the vocals. The vocals were analysed by transient density, controlling the convolution effect on the piano. Both musicians found this difficult, but musically meaningful. Sometimes the control aspect was experienced as counterintuitive to the musical intention. Pianist: "It felt like there was a 3rd musician present."

 

Saxophone self-adaptive processing

Sound example 9: We started with a performance doing live processing to familiarize the performer with the effects. The performer found the augmentation of extended techniques such as clicks and pops interesting, since this magnified "small" sounds.

 

Sound example 10: Self-adaptive processing performance where the saxophone was analysed by transient density, which was then used to control two different convolution effects (a recording of metal parts and a recording of a cymbal), the first resulting in a delay-like effect, the second in a reverb. The higher the transient density in the analysis, the more delay and less reverb, and vice versa. The performer experienced the quality of the two effects as quite similar, so we removed the delay effect.

 

Sound example 11: Self-adaptive processing performance using the same set-up but changing the delay effect to overdrive. The way it was set up, the overdrive on saxophone did not bring anything new to the table, since the acoustic sound of the instrument can sound similar to the effect when played with strong energy.

 

Day 2. Wednesday 8th March

 

Saxophone and piano

Sound example 12: Performance with saxophone and live processing, familiarizing the performer with the different effects and then choosing which of them to bring further into the session. The performer found this interesting and wanted to continue with the reverb ideas.

 

Sound example 13: Performance with piano and live processing. The performer especially liked the last part with the delays. Saxophonist: "It was like listening to the sound under water (convolution with water) sometimes, and sometimes like listening to an old radio (overdrive)". The pianist wanted to keep the effects that were introduced.

 

Sound example 14: Adaptive processing, controlling the delay on the saxophone from the piano by analysing transient density. The higher the transient density, the larger the gap between delays on the saxophone. The saxophone player found it difficult to interact, since the piano had a clean sound during the performance. The pianist, on the other hand, felt in control of the effect that was added.

 

Sound example 15: Adaptive processing using the saxophone to control the piano. We analysed the rhythmical consonance of the saxophone: the higher the degree of consonance, the more convolution effect (water) was added to the piano, and vice versa. The saxophonist didn't feel in control during the performance, and guessed it was due to not holding a steady rhythm over a longer period. The direct sound of the piano was also a bit loud in the mix, making the added effect a bit low in the mix. The pianist felt that the saxophone was in control, but agreed that the analysis was not able to read to the limit because of the lack of a steady rhythm over a longer time period.

 

Sound example 16: Crossadaptive performance using the same set-up as in examples 14 and 15. Both performers felt in control, and started to explore more of the possibilities. An interesting point occurs when the saxophone stops playing, since the rhythmical consonance analysis will make a drop as soon as it starts to read again. This could result in strong musical statements.

 

Sound example 17: Crossadaptive performance keeping the same settings but adding rms analysis of the saxophone to control a delay on the piano (the higher the rms, the less delay, and vice versa).

 

Vocals and electric guitar

Sound example 18: Performance with vocals and live processing. Vocalist: "It is fun, but something you need to get used to, it needs a lot of time".

 

Sound example 19: Performance with guitar and live processing. Guitarist: "Adapted to the effects, my direct sound probably sounds terrible, feel that I'm losing my touch, but feels complementary and a nice experience".

 

Sound example 20: Performance with adaptive processing, analysing the guitar using rms and transient density. The higher the transient density, the more delay is added to the vocals, and the higher the rms, the less reverb is added to the vocals. Guitarist: "I feel like a remote controller, and it is hard to focus on what I play sometimes". Vocalist: "Feels like a two-dimensional way of playing".

 

Sound example 21: Performance with adaptive processing, controlling the guitar from the vocals. The rhythmical consonance of the vocals controls the time gap between delays inserted on the guitar: higher rhythmical consonance results in larger gaps, and vice versa. The transient density of the vocals controls the amount of pitch shift added to the guitar: the higher the transient density, the less volume is sent to the pitch shift.

 

Sound example 22: Performance with cross adaptive processing using the same settings as in sound examples 20 and 21.

Vocalist: “It is another way of making music, I think”. Guitarist: “I feel control and I feel my impact, but musical intention really doesn’t fit with what is happening – which is an interesting parameter. Changing so much with doing so little is cool”.

 

Observation and reflections

The sessions have now come to a point where less time is used on setting up and figuring out how the functionality in the software works, and more time is used on actual testing. This is an important step, considering that we are working with musicians who are introduced to the concept for the first time. Good stability in the software and separation between microphones make the workflow much more effective. It still took some time to set up everything on the first day, due to two system crashes, the first related to the MIDIator, the second related to video streaming.

 

Since the system was prepared beforehand, there was a lot of reuse, both concerning analysis methods and the choice of effects. Even though there was a lot of reuse on the technical side, the performances and results show a large variety of expression. Even though this is not surprising, we think it is an important aspect to be reminded of during the project.

 

Another technical workaround that was discussed concerning the analysis stage was the possibility of operating with two different microphones on the same instrument. The idea is to use one for the analysis, and one for capturing the "total" sound of the instrument for use in processing. This will of course depend on which analysis parameters are in use, but will surely help achieve a more dynamic reading in some situations, both with regard to bleed and for a closer focus on the desired attributes.

 

The pedagogical approach using the four-step introduction was experienced as fruitful when introducing the concept to musicians for the first time. This helped their understanding during the process and therefore resulted in more productive discussions and reflections between the performers during the session. Starting with live processing says something about the possibilities and flexible control over different effects early in the process, and gives the performers a chance to take part in deciding the aesthetics and building a framework before entering the control aspect.

 

Quotes from the performers:

Guitarist: “Totally different experience”. “Felt best when I just let go, but that is the hardest part”. “It feels like I’m a midi controller”. “… Hard to focus on what I’m playing”. “Would like to try out more extreme mappings”

Vocalist: “The product is so different because small things can do dramatic changes”. “Musical intention crashes with control”. “It feels like a 2-dimensional way of playing”

Pianist: "Feels like an extra musician"

 

 

 

Seminar on instrument design, software, control
Fri, 24 Mar 2017

Online seminar March 21

Trond Engum and Sigurd Saue (Trondheim)
Bernt Isak Wærstad (Oslo)
Marije Baalman (Amsterdam)
Joshua Reiss (London)
Victor Lazzarini (Maynooth)
Øyvind Brandtsegg (San Diego)

Instrument design, software, control

We now have some tools that allow practical experimentation, and we have had the chance to use them in some sessions. We have some experience as to what they solve and don't solve, and how simple (or not) they are to use. We know that they are not completely stable on all platforms; there are some "snags" on initialization and/or termination that give different problems on different platforms. Still, in general, we have just enough to evaluate the design in terms of instrument building, software architecture, interfacing and control.

We have identified two distinct modes of working crossadaptively: the Analyzer-Modulator workflow, and a Direct-Cross-Synthesis workflow. The Analyzer-Modulator method consists of extracting features and arbitrarily mapping these features as modulators to any effect parameter. The Direct-Cross-Synthesis method consists of a much closer interaction directly on the two audio signals, for example as seen with the liveconvolver and/or different forms of adaptive resonators. These two methods give very different ways of approaching the crossadaptive interplay, with the direct-cross-synthesis method being perceived as closer to the signal, and as such, in many ways closer to each other for the two performers. The Analyzer-Modulator approach allows arbitrary mappings, and this is both a strength and a weakness. It is powerful in allowing any mapping, but it is harder to find mappings that are musically and performatively engaging. At least this can be true when a mapping is used without modification over a longer time span. As a further extension, an even more distanced manner of crossadaptive interplay was recently suggested by Lukas Ligeti (UC Irvine, following Brandtsegg's presentation of our project there in January). Ligeti would like to investigate crossadaptive modulation on MIDI signals between performers. The mapping and processing options for event-based signals like MIDI would have even more degrees of freedom than what we achieve with the Analyzer-Modulator approach, and it would have an even greater degree of "remoteness" or "disconnectedness". For Ligeti, one of the interesting things is this disconnectedness and how it affects our playing. In perspective, we start to see some different viewing angles on how crossadaptivity can be implemented and how it can influence communication and performance.

In this meeting we also discussed problems with the current tools, mostly concerning the tools of the Analyzer-Modulator method, as that is where we have experienced the most obvious technical hindrances to effective exploration. One particular problem is the use of MIDI controller data as our output. Even though it gives great freedom in modulator destinations, it is not straightforward for a system operator to keep track of which controller numbers are actively used and what destinations they correspond to. Initial investigations of using OSC in the final interfacing to the DAW have been done by Brandtsegg, and many current DAWs seem to allow "auto-learning" of OSC addresses based on touching the controls of the modulator destination within the DAW. A two-way communication between the DAW and our mapping module should be within reach and would immensely simplify that part of our tools.
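
As a pointer to how light-weight the sending side of such an OSC interface can be, the sketch below transmits a normalized modulator value with the python-osc library. The IP address, port and parameter path are hypothetical and would depend on what the DAW has learned.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical setup: the DAW listens for OSC on port 8000 and has "learned" the
# address below by touching the destination control. Address and port are assumptions.
client = SimpleUDPClient("127.0.0.1", 8000)

def send_modulator(address, value):
    """Send one normalized modulator value (0..1) to a DAW-learned OSC address."""
    client.send_message(address, float(value))

send_modulator("/track/2/fx/1/param/3", 0.42)
```
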
We also discussed the selection of features extracted by the Analyzer: which ones are most actively used, whether any could be removed, and/or whether any could be refined.

Initial comments

Each of the participants was invited to give their initial comments on these issues. Victor suggests we could rationalize the tools a bit, simplify, and get rid of the Python dependency (which has caused some stability and compatibility issues). This should be done without losing flexibility and usability. Perhaps a turn towards the originally planned direction of relying basically on Csound for analysis instead of external libraries. Bernt has had some meetings with Trond recently and they have some common views. For them it is imperative to be able to use Ableton Live for the audio processing, as the creative work during sessions is really only possible using tools they are familiar with. Finding solutions to aesthetic problems that may arise requires quick turnarounds, and for this to be viable, familiar processing tools. There have been some issues related to stability in Live, which have sometimes significantly slowed down or outright hindered an effective workflow. Trond appreciates the graphical display of signals, as it helps in teaching performers how the analysis responds to different playing techniques.

Modularity

Bernt also mentions the use of very simple, scaled-down experiments directly in Max, done quickly with students. It would be relatively easy to make simple patches that combine analysis of one (or a few) features with a small number of modulator parameters. Josh and Marije also mention modularity and scaling down as measures to clean up the tools. Sigurd has some other perspectives on this, as it also relates to what kind of flexibility we might want and need, how much free experimentation with features, mappings and destinations is needed, and also to whether we are making the tools for an end user or for the research personnel within the project. Oeyvind also mentions some arguments that directly oppose a modular structure, both in terms of the number of separate plugins and separate windows needed, and also in terms of analyzing one signal in relation to activity in another (e.g. for cross-bleed reduction and/or masking features).

Stability

Josh asks about the stability issues reported: have any special feature extractors, or other elements, been identified that trigger instabilities? Oeyvind and Victor discuss the Python interface, as this is one issue that frequently comes up in relation to compatibility and stability. There are things to try, but perhaps the most promising route is to try to get rid of the Python interface. Josh also asks about the preferred DAW used in the project, as this obviously influences stability. Oeyvind has good experience with Reaper, and this coincides with Josh's experience at QMUL. In terms of stability and flexibility of routing (multichannel), Reaper is the best choice. Crossadaptive work directly in Ableton Live can be done, but always involves a hack. Other alternatives (Plogue Bidule, Bitwig…) are also discussed briefly. Victor suggests selecting a reference set of tools, which we document well in terms of how to use them in our project. Reaper has not been stable for Bernt and Trond, but this might be related to the setting of specific options (running plugins in separate/dedicated processes, and other performance options). In any case, the two DAWs of immediate practical interest are Reaper (in general) and Live (for some performers). An alternative to using a DAW to host the Analyzer might also be to create a standalone application, as a "server", sending control signals to any host. There are good reasons for keeping it within the DAW, both for session management (saving setups) and for preprocessing of input signals (filtering, dynamics, routing).

Simplify

Some of the stability issues can be remedied by simplifying the analyzer, getting rid of unused features, and also getting rid of the Python interface. Simplification will also enable use by less trained users, as it enables self-education and the ability to just start using it and experiment. Modularity might also enhance such self-education, but one take on "modularity" might simply be hiding irrelevant aspects of the GUI.
In terms of feature selection, the filtering of the GUI display (showing only a subset) is valuable. We also see that the number of actively used parameters is generally relatively low; our "polyphonic attention" for following independent modulations is generally limited to 3-4 dimensions.
It seems clear that we have some overshoot in terms of flexibility and number of parameters in the current version of our tools.

Performative

Marije also suggests we should investigate further what happens on repeated use. When the same musicians use the same setup several times over a period of time, working more intensively, they can just play and see which combinations wear out and which stay interesting. This might guide us in the general selection of valuable features. We also touched briefly on the issue of using static mappings as opposed to changing the mapping on the fly within the short time span of a single performance. Giving the system operator a more expressive role might also resolve situations where a particular mapping wears out or becomes inhibiting over time. So far we have created very repeatable situations, to investigate in detail how each component works. Using a mapping that varies over time can enable more interesting musical forms, but will in general also make the situation more complex. Remembering how performers can respond positively to a certain "richness" of the interface (tolerating and even being inspired by noisy analysis), perhaps varying the mapping over time can also shift the attention more onto the sounding result and playing holistically by ear, rather than intellectually dissecting how each component contributes.
The concluding remarks also suggest that we still need to play more with it: to become more proficient, gain more control, explore, and get used to (and tired of) how it works.

 

 

 

]]>
http://crossadaptive.hf.ntnu.no/index.php/2017/03/24/seminar-on-instrument-design-software-control/feed/ 0 776
Conversation with Marije, March 2017 http://crossadaptive.hf.ntnu.no/index.php/2017/03/20/conversation-with-marije-march-2017/ http://crossadaptive.hf.ntnu.no/index.php/2017/03/20/conversation-with-marije-march-2017/#respond Mon, 20 Mar 2017 21:34:40 +0000 http://crossadaptive.hf.ntnu.no/?p=760 Continue reading "Conversation with Marije, March 2017"]]> After an inspiring talk with Marije on March 3rd, I set out to write this blog post to sum up what we had been talking about. As it happens (and has happened before), Marije had a lot of pointer to other related works and writings. Only after I had looked at the material she pointed to, and reflected upon it, did I get around to writing this blog post. So substantial parts of it contains more of a reflection after the conversation, rather than an actual report of what was said directly.
Marije mentiones we have done a lot of work, it is inspiring, solid, looks good.

Agency, focus of attention

One of the first subjects in our conversation was how we relate to the instrument. For performers: How does it work? Does it work? (Does it do what we say/think it does?) What do I control? What controls me? When successful, it might constitute a third agency, a shared feeling, mutual control: not acting as a single musician, but as an ensemble. The same observation can of course be made when playing in acoustic ensembles too, but it is connected differently in our setting.

Direct/indirect control. Play music or generate control signals? Very direct and one-dimensional mappings can easily feel like playing to generate control signals. Some control signals can be formed (by analyzing) over longer time spans, as they represent more of a "situation" than an immediate "snapshot". Perhaps it is just as interesting for a musician to outline a situation over time as to simply control one sonic dimension by acting on another?

Out-of-time'd-ness, relating to the different perceptions of the performative role experienced in IR recording (see posts on convolution sessions here, here and here). A similar experience can be identified within other forms of live sampling, and to some degree it is recognizable in all sorts of live processing as an instrumental activity. For the live processing performer there is a detachedness of control, as opposed to directly playing each event.

Contrived and artificial mappings. I asked whether the analyzer-modulation mappings are perhaps too contrived, too "made up". Marije replied that everything we do in electronic music instrument design (and mapping) is to some degree made up. It is always arbitrary: design decisions, something made up. There is not one "real" way, no physical necessity or limitation that determines what the "correct" mapping is. As such, there are only mappings that emphasize different aspects of performance and interaction, and new ideas that might seem "contrived" can contain yet-to-be-seen areas of such emphasis. The composition lies in these connections. For longer pieces one might want variation in the mapping, for example in the combined instrument created by voice and drums in some of our research sessions. Depending on the combination and how it is played, the mapping might wear out over time, so one might want to change it during a single musical piece.

Limitation. In January I did a presentation at UC Irvine, for an audience well trained in live processing and electronic music performance. One of the perspectives mentioned there was that the cross-adaptive mapping could also be viewed as a limitation. One could claim that all of the modulations we perform cross-adaptively could have been controlled manually, and with much more musical freedom. Still, the crossadaptive situation provides another kind of dynamic. The acoustic instrument is immediate and multidimensional, providing a nuanced and intuitive interface, and we can tap into that. As an example of how the interface changes the expression, look at how we (Marije) use accelerometers over 3 axes of motion: one could produce the exact same control signals using 3 separate faders, but the agency of control, the feeling, the expressivity, the dynamic is different with accelerometers than it is with faders. It is different to play, and this will produce different results. The limitations (of an interface or a mapping) can be viewed as something interesting, just as much as something that inhibits.

Analyzer noise and flakyness

One thing that has concerned me lately is that the analyzer is sometimes too sensitive to minor variations in the signal. Mathematical differences sometimes occur on a different scale than the expressive differences. One example is the rhythm analyzer, which I think is too noisy and unreliable, yet in practical use in sessions the musicians found it very appropriate and controllable.
Marije reminds me that in the live performance setting, small accidents and surprises are inspiring. In a production setting, perhaps not so much. Musicians are trained to embrace the imperfections and characteristic traits of their instrument, so it is natural for them to respond in a similar manner to imperfections in the adaptive and crossadaptive control methods. This makes me wonder whether there is a research methodology of accidents: how to understand the art of the accident, the failure of the algorithm, as in glitch, circuit bending, and other art forms concerned with distilling and refining "the unwanted".

Rhythm analysis

I will refine the rhythm analysis, as it seems promising as a measure of musical expressivity. I have some ideas about maintaining several parallel hypotheses on how to interpret the input, based on previous rhythm research. Some of this comes from "Machine Musicianship" by Robert Rowe, some from reading a UCSD dissertation by Michelle L. Daniels: "An Ensemble Framework for Real-Time Beat Tracking". I am currently trying to distill these into the simplest possible method of rhythm analysis for our purposes, so I asked Marije for ideas on how to refine the rhythm analyzer. Rhythm can be a parameter that outlines "a situation" just as much as it creates a "snapshot" (recall the discussion of agency and direct/indirect control, above). One thing we may want to extract is slower shifts from one situation to another. My concern that it takes too long to analyze a pattern (at least as long as the pattern itself, which might be several seconds) can then be regarded as less of a problem, since we are not primarily looking for immediate output. Still, I will attempt to minimize the latency of rhythm analysis, so that any delay in response is due to aesthetic choice and not limited by the technology. She also mentions the other Nick Collins. I realize that he is the one behind the bbcut algorithm, also found in Csound, which I used a lot a long time ago. Collins has written a library for feature extraction within SuperCollider, which to some degree overlaps with the feature extraction in our Analyzer plugin. Collins invokes external programs to produce similarity matrices, something that might be useful for our purposes as well, as a means of finding temporal patterns in the input. His rhythm analysis is based on beat tracking, as is common. While our rhythm analysis attempts at *not relying* on beat tracking, we could still implement it, if nothing else to use it as a measure of beat tracking confidence (assuming this as a very coarse distinction between beat-based and more temporally free music).
Another perspective on rhythm analysis can perhaps be gained from Clarence Barlow's interest in ratios. The Ratio Book is available online, as are a lot of his other writings. Barlow states that "In the case of ametric music, all pulses are equally probable", which leads me to think that any sort of statistical analysis, such as the frequency of occurrence of observed inter-onset times, will start to give indications of "what this is", lifting it slowly out of the white-noise mud of equal probabilities.
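As a sketch of what such a statistical approach could look like (this is not part of the current Analyzer, and the onset detection is assumed to happen elsewhere), one could histogram the observed inter-onset intervals and use a flatness measure on the histogram as a crude indicator of how far the input has been lifted out of the "equal probability" state:

```python
import numpy as np

def ioi_histogram(onset_times, n_bins=50, max_ioi=2.0):
    """Histogram of inter-onset intervals (in seconds) from a list of onset times."""
    iois = np.diff(np.asarray(onset_times, dtype=float))
    iois = iois[(iois > 0) & (iois <= max_ioi)]
    hist, edges = np.histogram(iois, bins=n_bins, range=(0.0, max_ioi), density=True)
    return hist, edges

def histogram_flatness(hist, eps=1e-9):
    """Geometric mean over arithmetic mean: close to 1 when all inter-onset
    times are equally probable (ametric), approaching 0 as clear peaks emerge."""
    h = np.asarray(hist) + eps
    return float(np.exp(np.mean(np.log(h))) / np.mean(h))

# example: a fairly regular pulse of about 0.5 seconds
onsets = np.cumsum(0.5 + 0.02 * np.random.randn(100))
hist, _ = ioi_histogram(onsets)
print(histogram_flatness(hist))  # low value: a clear pulse is present
```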

Barlow uses the "Indispensability formula" for relating the importance of each subdivision within a given meter. Perhaps we could invert this somehow to give a general measure of "subdivided-ness"? We are not really interested in finding the meter, but the patterns of subdivision are nonetheless of interest. He also uses the "Indigestibility formula" for ratios, based on prime-ness, and suggests a cultural digestibility limit around 10 (10:11, 11:12, 12:13 …). I have been pondering different ways of ordering the complexity of different integer ratios, such as different rhythmic subdivisions. The indigestibility formula might be one way to approach it, but reading further in the Ratio Book, the writing of Demetrios E. Lekkas leads me to think of another way to sort the subdivisions into increasing complexity:

Lekkas describes the traditional manner of writing down all rational numbers by starting with 1/1 (p 38), then increasing the numerator by one and going through all denominators from 1 up to the numerator, skipping fractions that can be simplified since they represent numbers already listed. This ordering does not imply any relation to the complexity of the ratios produced. If we tried to use it as such, one problem is that it determines that subdividing in 3 is less complex than subdividing in 4. Intuitively, I would say a rhythmic subdivision in 3 is more complex than a further subdivision of the already established subdivision in 2. Now, to find a measure of complexity, could we assume that divisions falling further away from any previously established subdivision are simpler than the ones creating closely spaced divisions? So, when dividing 1/1 in 2, we get a value at 0.5 (in addition to 0.0 and 1.0, which we omit for brevity). Then, to decide which further division comes next in order of complexity, we try out all possible further subdivisions up to some limit and look at the resulting values and their distances to the already existing values.
Dividing in 3 gives 0.33 and 0.66 (approximately), while dividing in 4 gives the (new) values 0.25 and 0.75. Dividing by 5 gives new values at .2 and .4, while dividing by 6 is unnecessary, as it does not produce any larger distances than those already covered by 3. Dividing by 7 gives values at .142, .285 and .428. Dividing by 8 is unnecessary, as it does not produce any values at a larger distance than the division by 4.
The smallest distance introduced by dividing in 3 is from 0.33 to 0.5, a distance of approximately 0.17. The smallest distance introduced by dividing in 4 is from 0.25 to 0.5, a distance of 0.25. Dividing in 4 is thus less complex. Checking the divisions by 5 and 7 can be left as an exercise for the reader.
Then we go on to the next subdivision, as we now have a grid of 1/2 plus 1/4, with values at 0.25, 0.5 and 0.75. The next two alternatives (in increasing numeric order) are division by 3 and division by 5. Division by 3 gives a smallest distance (to our current grid) from 0.25 to 0.33 = 0.08. Division by 5 gives a smallest distance from 0.2 to 0.25 = 0.05. We conclude that division by 3 is less complex. But wait, let's check division by 8 too while we're at it (leaving divisions by 6 and 7 as an exercise for the reader). Division by 8, in relation to our current grid (.25, .5, .75), gives a smallest distance of 0.125. This is larger than the smallest distance produced by division in 3 (0.08), so we choose 8 as the next number in increasing order of complexity.
Following up on this method, using a highest subdivision of 8, eventually gives us the order 2, 4, 8, 3, 6, 5, 7 as subdivisions in increasing order of complexity. This coincides with my intuition of rhythmic complexity, and can be reached by the simple procedure outlined above. We could also use the same procedure to determine the exact value of complexity for each of these subdivisions, as a means to create an output "value of complexity" for integer ratios. As a side note to myself: check how this will differ from using Tenney height or Benedetti height, as I have used earlier in the Analyzer.
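The procedure is easily automated. Below is a minimal sketch of my own reading of it, with a maximum divisor of 8 and ties broken towards the smaller divisor (which is what places 3 before 6); the per-step score could also serve as the numeric "value of complexity" mentioned above:

```python
from fractions import Fraction

def complexity_order(max_div=8):
    """Order the subdivisions 2..max_div by increasing complexity: the next
    (least complex) candidate is the one whose new grid points keep the
    largest minimal distance to the already established grid."""
    grid = {Fraction(0), Fraction(1)}            # the undivided unit 1/1
    remaining = list(range(2, max_div + 1))
    order = []
    while remaining:
        best_n, best_score = None, None
        for n in remaining:                      # increasing order: smaller divisors win ties
            new_points = [Fraction(k, n) for k in range(1, n)
                          if Fraction(k, n) not in grid]
            if not new_points:                   # nothing new: maximally redundant
                score = Fraction(0)
            else:
                score = min(min(abs(p - g) for g in grid) for p in new_points)
            if best_score is None or score > best_score:
                best_n, best_score = n, score
        order.append(best_n)
        remaining.remove(best_n)
        grid.update(Fraction(k, best_n) for k in range(1, best_n))
    return order

print(complexity_order(8))   # [2, 4, 8, 3, 6, 5, 7]
```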

On the justification for coming up with this procedure I might lean lightly on Lekkas again: “If you want to compare them you just have to come up with your own intuitive formula…deciding which one is more important…That would not be mathematical. Mind you, it’s non-mathematical, but not wrong.” (Ratio book p 40)
Much of the book relates to ratios as in pitch ratios and tuning. Even though we can view pitch and rhythm as activity on the same scale, as vibrations/activations at different frequencies, the perception of pitch is further complicated by the anatomy of our inner ear (critical bands), and by cultural aspects and habituation. Presumably, these additional considerations should not be used to infer the complexity of rhythmic activity. We cannot directly use the harmonicity of pitch as a measure of the harmonicity of rhythm, even though it might *to some extent* hold true (and I have used this measure up until now in the Analyzer).

Further writings by Barlow on this subject can also be found in his On Musiquantics. "Can the simplicity of a ratio be expressed quantitatively?" (p 38) relates to the indigestibility formula. See also how "metric field strength" (p 44) relates to the indispensability formula. The section from p 38-47 concerns this issue, as does his "Formulæ for Harmonicity" (p 24, part II), with Interval Size, Ratios and Harmonic Intensity on the following pages. For pitch, the critical bandwidth (p 48) is relevant, but we could discuss whether the "larger distance created by a subdivision" that I outlined above is more appropriate for rhythmic ratios.

Instrumentality

The 3DMIN book "Musical Instruments in the 21st Century" explores various notions of what an instrument can be, for example the instrument as a possibility space. Lopes/Hoelzl/de Campo, in their many-fest, advise us to "favour variety and surprise over logical continuation" and to "enjoy the moment we lose control and gain influence". We can relate this to our recent reflections on how performers in our project thrive in a setting where the analysis methods are somewhat noisy and chaotic, the essence being that they can control the general trend of modulation, but still be surprised and "disturbed" by the immediate details. Here we again encounter methods of the "less controllable": circuit bending, glitch, autopoietic (self-modulating) instruments, meta-control techniques (de Campo), and similarly the XY interface for our own Hadron synthesizer, to mention a few apparent directions. The 3DMIN book also has a chapter by Daphna Naphtali on using live processing as an instrument. She identifies some potential problems with the invisible instrument. One problem, according to Naphtali, is that it can be difficult to identify the contribution of the performer operating it. One could argue that invisibility is not necessarily a problem, but it (invisibility and the intangible) is indeed a characteristic trait of the kind of instruments that we are dealing with, be it for live processing as controlled by an electronic musician, or for crossadaptive processing as controlled by the acoustic musicians.

Marije also has a chapter in this book, on the blurring boundaries between composition, instrument design, and improvisation: "the algorithm for the translation of sensor data into music control data is a major artistic area; the definition of these relationships is part of the composition of a piece" (Waisvisz 1999, cited by Marije).

Using adaptive effects as a learning strategy

In light of the complexity of crossadaptive effects, the simpler adaptive effects could be used as a means of familiarization for performers and "mapping designers" alike: getting to know how the analyzer reacts to different source material, and how to map the signals in a musically effective manner. The adaptive use case is also more easily adaptable to a mixing situation, to composed music, and to any other kind of repeatable situation. The analyzer methods can be calibrated and tuned more easily for each specific source instrument. Perhaps we could also look at a possible methodology for familiarization: how do we most efficiently get to know these feature-to-modulator mappings? Revisiting the literature on adaptive audio effects (Verfaille etc.) in light of our current status and reflections might be a good idea.

Performers utilizing adaptive control

Similarly, it might be a good idea to get in touch with environments and performers using adaptive techniques as part of their setup. Marije reminded me that Jos Zwaanenburg and his students at the Conservatorium of Amsterdam might have more examples of musicians using adaptive control techniques. I met Jos some years ago, and have now contacted him again via email. Hans Leeuw is another Dutch performer working with adaptive control techniques. His 2009 NIME article mentions no adaptive control, but has a beautiful statement on the design of mappings: "…when the connection between controller and sound is too obvious the experience of 'hearing what you see' easily becomes 'cheesy' and 'shallow'. One of the beauties of acoustic music is hearing and seeing the mastery of a skilled instrumentalist in controlling an instrument that has inherent chaotic behaviour". In the 2012 NIME article he mentions audio analyses for control. I contacted Hans to get more details and updated information about what he is using. Via email he tells me that he uses the noise/sinusoidal balance as a control both for signal routing (trumpet sound routed to different filters) and for reconfiguring the mapping of his controllers (as appropriate for the different filter configurations). He mentions that the analyzed transition from noise to sinusoidal can be sharp, and that additional filtering is needed to get a smooth transition. A particularly interesting area occurs when the routing and mapping are in this intermediate region, where both modes of processing and mapping are partly in effect.
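The additional filtering Hans describes could be as simple as a one-pole lowpass (exponential smoothing) on the analysis signal, possibly with separate attack and release coefficients. This is just a generic sketch of that idea, not his actual implementation:

```python
def smooth_control(values, attack=0.5, release=0.95):
    """One-pole smoothing of a control signal, one value per analysis frame.
    Lower coefficients react faster; separate attack/release lets the
    noise-to-sinusoidal transition rise quickly but fall back gradually."""
    out, y = [], 0.0
    for x in values:
        coeff = attack if x > y else release
        y = coeff * y + (1.0 - coeff) * x
        out.append(y)
    return out

print(smooth_control([0, 0, 1, 1, 1, 0, 0, 0]))
```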

As an example of a researcher/performer who has explored voice control, Marije mentioned Dan Stowell.
Not surprisingly, he has also done his research in the context of QMUL. Browsing his thesis, I note some useful terms for ranking extracted features, as he writes about *perceptual relevance*, *robustness*, and *independence*. His experiments on ranking the different features are not conclusive, as "none of the experiments in themselves will suggest a specific compact feature set". This coincides with our own experience so far as well: different instruments and different applications require different subsets of features. He does, however, mention spectral centroid as particularly useful. We have initially not used this so much due to its high degree of temporal fluctuation. Similarly, he mentions spectral spread, where we have so far used spectral flatness and spectral flux more. This also reminds me of recent discussions on the Csound list regarding different implementations of the analysis of spectral flux (difference from frame to frame, or normalized inverse correlation); it might be a good idea to test the different implementations to see if we can have several variations on this measure, since we have found it useful in some but not all of our application areas. Stowell also mentions log attack time, which we should revisit and see how we can apply or reformulate to fit our use cases. A measure that we have not considered so far is delta MFCCs, the temporal variation within each cepstral band. Intuitively it seems to me this could be an alternative to spectral flux, even though Stowell has found it not to share significant mutual information with spectral flux. In fact the delta MFCCs have little MI with any other features whatsoever, although this could be related to implementation details (decorrelation). He also finds that delta MFCCs have low robustness, but we should try implementing them and see what they give us. Finally, he also mentions *clarity* as a spectral measure, in connection with pitch analysis, defined as "the normalised strength of the second peak of the autocorrelation trace [McLeod and Wyvill, 2005]". It is deemed a quite robust measure, and we could most probably implement this with ease and test it.
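To make the two spectral flux variants mentioned above concrete, here is a sketch of both, operating on magnitude spectra of consecutive analysis frames. This is my own formulation of the two approaches discussed on the Csound list, not the exact Csound implementations:

```python
import numpy as np

def flux_difference(mag_prev, mag_curr, eps=1e-9):
    """Frame-to-frame difference: sum of half-wave rectified magnitude
    increases, normalized by the energy of the current frame."""
    d = np.maximum(mag_curr - mag_prev, 0.0)
    return float(np.sum(d) / (np.sum(mag_curr) + eps))

def flux_inverse_correlation(mag_prev, mag_curr, eps=1e-9):
    """Normalized inverse correlation: 0 for identical spectra,
    approaching 1 for completely dissimilar spectra."""
    num = float(np.dot(mag_prev, mag_curr))
    den = float(np.linalg.norm(mag_prev) * np.linalg.norm(mag_curr)) + eps
    return 1.0 - num / den

# example with two random magnitude spectra
a = np.abs(np.random.randn(512))
b = np.abs(np.random.randn(512))
print(flux_difference(a, b), flux_inverse_correlation(a, b))
```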

 

]]>
http://crossadaptive.hf.ntnu.no/index.php/2017/03/20/conversation-with-marije-march-2017/feed/ 0 760
Seminar: Mixing and timbral character http://crossadaptive.hf.ntnu.no/index.php/2017/03/02/seminar-mixing-and-timbral-character/ http://crossadaptive.hf.ntnu.no/index.php/2017/03/02/seminar-mixing-and-timbral-character/#respond Thu, 02 Mar 2017 19:28:02 +0000 http://crossadaptive.hf.ntnu.no/?p=738 Continue reading "Seminar: Mixing and timbral character"]]> Online conversation with Gary Bromham (London), Bernt Isak Wærstad (Oslo), Øyvind Brandtsegg (San Diego), Trond Engum and Andreas Bergsland (Trondheim). Gyrid N. Kaldestad, Oslo, was also invited but unable to participate.

The meeting revolves around the issues of "mixing and timbral character" as related to the crossadaptive project. As there are many aspects of the project that touch upon these issues, we have kept the agenda quite open for now, asking each participant to bring one problem/question/issue.

Mixing, masking

In Oslo they worked with the analysis parameters spectral crest and flux, aiming to use these to create a spectral "ducking" effect, where the actions of one instrument could selectively affect separate frequency bands of the other instrument. Gary is also interested in these kinds of techniques for mixing, to work with masking (allowing and/or avoiding masking). One could think of it as multiband sidechaining with dynamic bands, like a de-esser, but adaptive to whichever frequency band currently needs modification. These techniques relate to previous work on adaptive mixing (for example at QMUL) and are partially addressed by recent commercial plugins like iZotope Neutron.
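A crude offline sketch of this kind of spectral ducking is given below, using hard per-bin gating for clarity. A usable version would need smoothing of the gains over time and frequency (and a better activity measure than a fixed threshold); all parameter values here are arbitrary:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_duck(target, sidechain, sr=44100, nperseg=1024,
                  threshold=1e-3, floor_db=-12.0):
    """Attenuate the time/frequency bins of `target` where `sidechain` is active."""
    _, _, T = stft(target, fs=sr, nperseg=nperseg)
    _, _, S = stft(sidechain, fs=sr, nperseg=nperseg)
    n = min(T.shape[1], S.shape[1])              # align the two STFTs in time
    T, S = T[:, :n], S[:, :n]
    floor = 10.0 ** (floor_db / 20.0)
    gain = np.where(np.abs(S) > threshold, floor, 1.0)   # duck where the sidechain has energy
    _, y = istft(T * gain, fs=sr, nperseg=nperseg)
    return y
```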
However interesting these techniques are, the main focus of our current project is on the performative application of adaptive and crossadaptive effects. That said, it could be fruitful to use these techniques not to solve old problems, but to find new working methods in the studio as well. In the scope of the project, this kind of creative studio work can be aimed at familiarizing ourselves with the crossadaptive methods in a controlled and repeatable setting. Bernt also brought up the issue of recording the analysis signals, using them perhaps as source material for creative automation, editing the recorded automation as one might see fit. This could be an effective way of familiarizing ourselves with the analyzer output as well, as it invites a closer look at the details of the output of the different analysis methods. Recording the automation data is straightforward in any DAW, since the analyzer output comes into the DAW as external MIDI or OSC data. The project does not need to develop any custom tools to allow recording and editing of these signals, but it might be a very useful path of exploration in terms of working methods. I'd say yes please, go for it.

Working with composed material, post production

Trond had recently done a crossadaptive session with classical musicians playing composed material. It seems that this, even though done "live", has much in common with applying crossadaptive techniques in post production or in mixing, because the interactive element is much less apparent. The composition is a set piece, so any changes to the instrumental timbre will not change what is played, but rather can influence the nuances of interpretation. Thus, it is much more a one-way process than a dialectic between material and performance. Experts on the interpretation of composed music will perhaps cringe at this description, saying there is indeed a dialogue between interpretation and composition. While this is true, the degree to which the performed events can be changed is smaller within a set composition. In recent sessions, Trond felt that the adaptive effects would exist in a parallel world, outside of the composition's aesthetic, something unrelated added on top. The same can be said about using adaptive and crossadaptive techniques in the mixing stage of a production, where all tracks are previously recorded and thus in a sense can be regarded as a set (non-changeable) source. With regard to applying analysis and modulation to recorded material, one could also mention that the Oslo sessions used recordings of the instruments in the session to explore the analysis dimensions. This was done as an initial exploratory phase of the session. The aim was to find features that already exist in the performer's output, rather than imposing new dimensions of expression that the performer would need to adapt to.

On repeatability and pushing the system

The analysis-modulator response to an acoustic input is not always explicitly controllable. This is due to the nature of some of the analysis methods: technical weaknesses that introduce "flicker" or noise in the analyzer output. Even though these deviations are not inherently random, they are complex and sometimes chaotic. In spite of these technical weaknesses, we notice that our performers often thrive. Musicians will often "go with the flow" and create on the spot, the interplay being energized by small surprises and tensions, both in the material and in the interactions. This sometimes allows the use of analysis dimensions/methods that have spurious noise/flicker, still resulting in a consistent and coherent musical output, due to the performer's experience in responding to a rich environment of sometimes contradictory signals. This touches one of the core aspects of our project: intervention into the traditional modes of interplay and musical communication. It also touches upon the transparency of the technology: how much should the performer be aware of the details of the signal chain? Sometimes rationalization makes us play safe. A fruitful scenario would be aiming for analysis-modulator mappings that create tension, something that intentionally disturbs and refreshes. The current status of our research leaves us with a seemingly unlimited number of combinations and mappings, a rich field of possibilities yet to be charted. The options are still so many that any attempt at conclusions about how it works or how to use it seems futile. Exploration in many directions is needed. This is not aimless exploration, but rather searching without knowing what can be found.

Listening, observing

Andreas mentions it is hard to pinpoint single issues in this rich field. As an observer it can be hard to decode what is happening in the live setting. During sessions, it is sometimes a complex task to follow the exact details of the analysis and modulation. Then, when listening to the recorded tracks again later, it is easier to appreciate the musicality of the output. Perhaps not all details of the signal chain are cleanly defined and stringent in all aspects, but the resulting human interaction creates a lively musical output. As with other kinds of music making, it is easy to get caught up in detail at the time of creation. Trying to listen in a more holistic manner, taking in the combined result, is a skill not to be forgotten in our explorations either.

Adaptive vs cross-adaptive

One way of working towards a better understanding of the signal interactions involved in our analyzer-modulator system is to do adaptive modulation rather than cross-adaptive. This brings a much more immediate mode of control to the performer, exploring how the extracted features can be utilized to change his or her own sound. It seems several of us have been eager to explore these techniques, but have put it off since it did not align with the primary stated goals of crossadaptivity and interaction. Now, looking at the complexity of the full crossadaptive situation, it is fair to say that exploration of adaptive techniques can serve as a very valid way of getting in touch with the musical potential of feature-based modulation of any signal. In its own right, it can also be a powerful method of sonic control for a single performer, as an alternative to a large array of physical controllers (pedals, faders, switches). As mentioned earlier in this session, working with composed material or set mixes can be a challenge for the crossadaptive methods; exploring adaptive techniques might be more fruitful in those settings. Working with adaptive effects also brings attention to other possibilities of control for a single musician over his or her own sound. Some of the recent explorations of convolution with Jordan Morton show the use of voice-controlled crossadaptivity applied to a musician's own sound. In this case, the dual instrument of voice and bass operated by a single performer allows similar interactions between instruments, but bypasses the interaction between different people, thus simplifying the equation somewhat. This also brings our attention to using voice as a modulator of effects for instrumentalists who do not use voice as part of their primary musical output. Although this has been explored by several others (e.g. Jordi Janer, Stefano Fasciani, and the recent Madrona Labs "Virta" synth), it is a valid and interesting aspect, integral to our project.

 

]]>
http://crossadaptive.hf.ntnu.no/index.php/2017/03/02/seminar-mixing-and-timbral-character/feed/ 0 738