Crossadaptive session, NTNU, 12 December 2016

Participants:

Trond Engum (processing musician)

Tone Åse (vocals)

Carl Haakon Waadeland (drums and percussion)

Andreas Bergsland (video)

Thomas Henriksen (sound technician)

Video digest from session:

Session objective and focus:

The main focus of this session was to explore analysis methods other than those used in earlier sessions, concentrating on rhythmic consonance for the drums and spectral crest for the vocals. These analysis methods were chosen to gain a wider understanding of their technical functionality, but also of their possible use in musical interplay. In addition, we intended to include the sample/hold function in the MIDIator plug-in. The session was also set up with a large screen in the live room so that all participants could monitor the processing instrument at all times. The idea was to democratize the role of the processing musician during the session, opening up discussion and tuning of the system as a collective process based on mutual understanding. This would hopefully communicate a better understanding of the functionality of the system, and of how the musicians can individually navigate within it through their musical input. At the same time, this opens up a closer dialogue around the choice of effects and parameter mappings during the process.

Earlier experiences and process

Following up on experiences documented in earlier sessions and previous blog posts, the session was prepared to avoid the most obvious shortcomings. First of all, vocals and drums were placed in separate rooms to avoid bleed between microphones, which had previously affected both the analysed signals and the effects. The system was prepared beforehand to be as flexible as possible, containing several effects to map to; flexibility in this context means the possibility to make fast changes and tune the system depending on feedback from the musicians. Since the group of musicians remained unchanged during the session, this flexibility was also seen as a necessity for going into detail and making more subtle changes, both in the MIDIator and in the effects in play, in order to reach common aesthetic intentions.

Due to technical problems in the studio (not connected with the crossadaptive setup or software), the session was delayed for several hours, leaving us with less time than originally planned. We therefore chose to concentrate only on rhythmic consonance (referred to as rhythmical regularity in the video) as the analysis method for both drums and vocals. To familiarize ourselves with this analysis tool, we started with drums, trying out different playing techniques with both regular and irregular strokes while monitoring the visual feedback from the analyser plug-in, without any effect applied. Regular strokes resulted in a high, stable value; irregular strokes resulted in a low value.


Figure 1. Consonance (regularity) visualized in the upper graph.
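To illustrate the behaviour described above, a regularity measure over inter-onset intervals could be sketched as follows. This is a crude, hypothetical stand-in, not the actual rhythmic consonance algorithm in the analyser plug-in:

```python
import statistics

def regularity(onset_times):
    """Toy regularity measure: perfectly even strokes (equal inter-onset
    intervals) give 1.0, increasingly uneven strokes give values toward 0.
    A crude stand-in for illustration, not the plug-in's actual method."""
    iois = [b - a for a, b in zip(onset_times, onset_times[1:])]
    if len(iois) < 2:
        return 0.0
    # spread of the intervals relative to their mean (coefficient of variation)
    spread = statistics.pstdev(iois) / statistics.mean(iois)
    return 1.0 / (1.0 + spread)

print(regularity([0.0, 0.5, 1.0, 1.5]))  # regular strokes -> 1.0
print(regularity([0.0, 0.2, 1.0, 1.1]))  # irregular strokes -> lower value
```

As in the session, a steady pulse drives the value toward the top of the range, while uneven playing pulls it down.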

What became evident was that when the input stopped, the analyser stayed at the last measured value. In that way it could act as a sort of sample/hold function on the last value, stabilising a setting in an effect until input was introduced again. Another aspect was that the analysis method worked well for regularity in rhythm, but behaved more unpredictably when irregularity was introduced.
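This hold behaviour can be sketched as a tiny state machine (hypothetical code, purely to illustrate the principle):

```python
class HeldAnalyser:
    """Illustration of the sample/hold effect observed in the session:
    the analyser output is only overwritten while input is present,
    so the last measured value persists through silence."""

    def __init__(self):
        self.held = 0.0

    def update(self, input_present, measured):
        # overwrite the stored value only while there is input
        if input_present:
            self.held = measured
        return self.held
```

With this behaviour, an effect setting stays stable during pauses in the playing, exactly as observed in the session.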

After learning the analyser's behaviour, its output was mapped to a delay plug-in as an adaptive effect on the drums. The parameter controlled the time range of 14 delays, so that more regularity gave a larger delay time range, and vice versa.
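Assuming a simple linear mapping (the actual MIDIator scaling and curve were tuned by ear in the session, and the maximum range below is an assumed value), the relation between regularity and the delay times might look like this:

```python
def delay_times(regularity_value, max_range_ms=1000.0):
    """More regularity -> larger delay time range. The 14 delay times are
    spread evenly across that range. Hypothetical sketch: max_range_ms and
    the linear spread are assumptions, not the session's actual settings."""
    time_range = regularity_value * max_range_ms
    return [time_range * (i + 1) / 14.0 for i in range(14)]
```

A regularity of 1.0 spreads the delays across the full range; a value near 0 collapses them toward zero delay time.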

After fine-tuning the delay range, we agreed that the connection between the analyser, the MIDIator and the chosen effect worked musically in the same direction. (This was changed later in the session when trying out cross-adaptive processing.)

The same procedure was followed for the vocals, but with the visual monitoring concentrated mostly on the last stage of the chain, the delay effect. This was experienced as more intuitive once all settings were mapped, since the musician could then interact visually with the input during performance.

Cross-adaptive processing

By the time we started the cross-adaptive recording, everyone had followed the process and tried out the chosen analysis method on their own instruments. Even though the focus was mainly on technical aspects, the process had already given the musicians the opportunity to rehearse and get familiar with the system.

The system we ended up with was set up in the following way:

Both drums and vocals were analysed for rhythmical consonance (regularity). The drums controlled the send volume to a convolution reverb and a pitch shifter on the vocals: the more regular the drums, the less of these effects, and the less regular the drums, the more of the effects.

The vocals controlled the time range of the echo plug-in on the drums: the more regular the pulses from the vocals, the smaller the echo time range on the drums, and the less regular the pulses, the larger the echo time range.
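Both of these mappings are inverted: a high analysis value yields a low effect parameter. A minimal sketch of such an inverted mapping (hypothetical and linear; the MIDIator offers other shapes and ranges):

```python
def inverted_map(analysis_value, out_min=0.0, out_max=1.0):
    """Map a 0..1 analysis value (e.g. rhythmic regularity) inversely
    onto an effect parameter: more regularity -> less effect.
    Illustrative sketch, not the actual MIDIator mapping function."""
    return out_max - analysis_value * (out_max - out_min)

# very regular drums -> almost no reverb/pitch-shift send on the vocals
send_level = inverted_map(0.9)
```

The same function, fed with the vocal regularity, would scale the echo time range on the drums downwards as the vocal pulse becomes steadier.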

Sound example (improvisation with crossadaptive setup):


Concerts and presentations, fall 2016

A number of concerts, presentations and workshops were given during October and November 2016. We could call it the 2016 Crossadaptive Transatlantic tour, but we won’t. This post gives a brief overview.

Concerts in Trondheim and Göteborg

BRAK/RUG was scheduled for a concert (with a preceding lecture/presentation) at Rockheim, Trondheim on 21 October. Unfortunately, our drummer Siv became ill and could not play. At 5 in the afternoon (concert start at 7) we called Ola Djupvik to ask if he could sit in with us. Ola has experience of playing in musical settings with live processing and crossadaptive processing, for example in the session on 20–21 September, and also from performing with music technology students Ada Mathea Hoel, Øystein Marker and others. We were very happy and grateful for his courage in stepping in on such short notice. Here’s an excerpt from the presentation that night, showing vocal pitch controlling reverb on the drums (higher pitch means smaller reverb size), and transient density on the drums controlling delay feedback on the vocals (faster playing means less feedback).

There is a significant amount of crossbleed between vocals and drums, so the crossadaptivity is quite flaky. We still have some work to do on source separation to make this work well when playing live with a PA system.

Crossadaptive demo at Rockheim (mix_maja_ola_cross_rockheim_ptmstr)


Thanks to Tor Breivik for recording the Rockheim event. The clip here shows only the crossadaptive demonstration. The full concert is available on Soundcloud.

Brandtsegg, Ratkje, Djupvik trio at Rockheim

The day after the Trondheim concert, we played at the Göteborg Art Sounds festival. Now, Siv was feeling better and was able to play. Very nice venue at Stora Teatern. This show was not recorded.

And then we take… the US

The crossadaptive project was presented at the Transatlantic Forum in Chicago on October 24, in a special session titled “Sensational Design: Space, Media, and the Senses”. Sigurd Saue, Trond Engum and I (Øyvind Brandtsegg) all took part in the presentation, showing the many-faceted aspects of our work. Being a team of three also helped the networking effort that is naturally a part of such a forum. During our stay in Chicago, we also visited the School of the Art Institute of Chicago, meeting Nicolas Collins, Shawn Decker, Lou Mallozzi and Bob Snyder to start working on exchange programs for both students and faculty. Later in the week, Brandtsegg presented the crossadaptive project during a SAIC class on audio projects.

Sigurd Saue and Bob Snyder at SAIC

After Chicago, Engum and Saue went back to Trondheim, while I traveled on to San Francisco, Los Angeles, Santa Barbara, and finally San Diego.
In the Bay Area, after jamming with Joel Davel in Paul Dresher’s studio and playing a concert with Matt Ingalls and Ken Ueno at Tom’s Place, I presented the crossadaptive project at CCRMA, Stanford University on November 2. The presentation seemed well received and spurred a long discussion where we touched on the use of MFCCs, ratios and critical bands, stabilizing the peaks of rhythmic autocorrelation, the difference of the correlation between two inputs (to get at the details of each signal), and more. Getting the opportunity to discuss audio analysis with this crowd was a treat. I also got the opportunity to go back the day after to look at student projects, which gives a nice feel for the vibe of an institution. There is a video of the presentation here.

After Stanford, I also did a presentation at the beautiful CNMAT at UC Berkeley, with Ed Campion, Rama Gottfried and a group of enthusiastic students. There I also met my colleague P.A. Nilsson from Göteborg, who was on a residency there. P.A.’s current focus on technology to intervene in and structure improvisations is closely related to some of the implications of our project.

CNMAT, UC Berkeley

On November 7 and 8, I did workshops at California Institute of the Arts, invited by Amy Knoles. In addition to presenting the technologies involved, we did practical studies where the students played in processed settings and experienced the musical potential and also the different considerations involved in this kind of performance.

CalArts workshops

Clint Dodson and Øyvind Brandtsegg experimenting together at CalArts

At UC Santa Barbara, I did a presentation in Studio Xenakis on November 9. There, I met with Curtis Roads, Andres Cabrera, and a broad range of their colleagues and students. With regard to listening to crossadaptive performances, Curtis Roads made the precise observation that it is relatively easy to follow if one knows the mappings, but it can be hard to decode the mapping just by listening to the results. In Santa Barbara I also got to meet Owen Campbell, who did a master’s thesis on crossadaptive processing, and got insight into his research and software solutions. His work on ADEPT was also presented at the AES workshop on intelligent music production at Queen Mary University this September, where Owen also met our student Iver Jordal, who presented his research on artificial intelligence in crossadaptive processing.

San Diego

Back in San Diego, I did a combined presentation and concert for the computer music forum on November 17. I had the pleasure of playing together with Kyle Motl on double bass for this performance.

Kyle Motl and Øyvind Brandtsegg, UC San Diego

We demonstrated both live processing and crossadaptive processing between voice and bass. There was a rich discussion with the audience. We touched on issues of learning (one parameter at a time, or learning a combined and complex parameter set as one would on an acoustic instrument), etudes, inverted mappings sometimes being more musically intuitive, how this can make musicians pay more attention to each other than to themselves (frustrating or liberating?), and tuning of the range and shape of parameter mappings (which still seems a bit on/off at times, with relatively low resolution in the middle range).

First we did an example of a simple mapping:
Vocal amplitude reduces reverb size for the bass,
Bass amplitude reduces delay feedback on the vocals.

Kyle and Oeyvind, simple mapping (mix_sd_nov_17_3_cross_ptmstr)


Then a more complex example:
Vocal transient density -> Bass lowpass filter frequency
Vocal pitch -> Bass delay filter frequency
Vocal percussiveness -> Bass delay feedback
Bass transient density -> Vocal reverb size (less)
Bass pitch+centroid -> Vocal tremolo speed
Bass noisiness -> Vocal tremolo grain size (less)
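Written out as data, the mapping above amounts to a routing table of (source feature, destination parameter, polarity) entries. The names below are illustrative only, not the actual MIDIator configuration format:

```python
# polarity -1 marks the "(less)" mappings, where a higher analysis
# value reduces the destination parameter; +1 is a direct mapping
COMPLEX_MAPPING = [
    ("vocal.transient_density", "bass.lowpass_freq",        +1),
    ("vocal.pitch",             "bass.delay_filter_freq",   +1),
    ("vocal.percussiveness",    "bass.delay_feedback",      +1),
    ("bass.transient_density",  "vocal.reverb_size",        -1),
    ("bass.pitch_centroid",     "vocal.tremolo_speed",      +1),
    ("bass.noisiness",          "vocal.tremolo_grain_size", -1),
]
```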

Kyle and Oeyvind, complex mapping (mix_sd_nov_17_4_cross_ptmstr)


We also demonstrated another, more direct kind of crossadaptive processing: convolution with a live-sampled impulse response. Oeyvind manually controlled the live IR sampling of sections of Kyle’s playing, and also triggered the convolver by tapping and scratching on a small wooden box with a piezo microphone. The wooden box source is not heard directly in the recording, but the resulting convolution is. No other processing is done, just the convolution.
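The underlying operation is ordinary convolution: every sample of the piezo signal triggers a scaled, delayed copy of the live-sampled impulse response. A naive direct-form sketch (the actual system uses an efficient low-latency convolver, not this textbook loop):

```python
def convolve(signal, impulse_response):
    """Direct-form convolution of two sample lists: each input sample adds
    a scaled, delayed copy of the impulse response to the output."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, x in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h
    return out

# a single tap on the box (unit impulse) plays back the sampled IR as-is
print(convolve([1.0, 0.0, 0.0], [0.5, 0.25]))  # -> [0.5, 0.25, 0.0, 0.0]
```

This is why the tapping and scratching are not heard directly: they only excite the sampled material from the bass.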

Kyle and Oeyvind, convolution (mix_sd_nov_17_2b_conv_ptmstr)


We also played a longer track of regular live processing that evening. This track is available on Soundcloud.

Thanks to UCSD and recording engineers Kevin Dibella and James Forest Reid for recording the Nov 17 event.