Session with David Moss in Berlin

On Thursday February 1st, we had an enjoyable session at the Universität der Künste (UdK) in Berlin. This was at the Grunewaldstraße campus, generously hosted by Professor Alberto De Campo. It was a nice opportunity to follow up on our earlier collaboration with David Moss, as we have learned so much about performance, improvisation and music in general from him on previous occasions.
Earlier the same week, I had presented the crossadaptive project to Prof. De Campo's students of computational art and performance with complex systems. This environment of arts and media studies at UdK was particularly conducive to our research, and we had some very interesting discussions.

David Moss – vocals
Øyvind Brandtsegg – crossadaptive processing, vocals
Alberto De Campo – observer
Marija Mickevica – observer, and vocals on one take


More details on these tracks will follow; for now, I am just uploading them here so that the parties involved can access them.

DavidMoss_Take0

Initial exploration: the (becoming) classic reverb+delay crossadaptive situation

DavidMoss_Onefx1

Test session, exploring one effect only

DavidMoss_Onefx2

Test session, exploring one effect only  (2)

DavidMoss_Take1

First take

DavidMoss_Take2

Second take

DavidMoss_Take3

Third take

Then we did some explorations with David telling stories, his voice live convolved with Øyvind's impulse responses.

DavidMoss_Story0

Story 1

DavidMoss_Story1

Story 2

And we were lucky that the student Marija Mickevica wanted to try recording live impulse responses while David was telling stories. Here's an example:

DavidMoss_StoryMarija

Story with Marija’s impulse responses

And a final take with David and Øyvind, where all previously tested effects and crossadaptive mappings were enabled. The selective mix of effects and modulations was controlled manually by Øyvind during the take.

DavidMoss_LastTake

Final combined take


Convolution demo sounds

The convolution techniques we've developed in this project are aimed at live performance, and so far they have mostly been used for that purpose. The manner in which the filters can be continuously updated might also allow for other creative applications of convolution and filtering. As a very basic demo of how the filters sound with prerecorded material, we've made two audio examples. Do note that the updating of the filters still happens in real time; it is just the source sounds playing into the filters that are prerecorded.

Source

The source sounds are prepared as a stereo file, with sound A in the left channel and sound B in the right channel. Sound A is assembled from sounds downloaded from freesound.org, all made by user Corsica_S (files “alpha.wav”, “bravo.wav”, “charlie.wav”, “delta.wav”). Sound B is a drum loop programmed by Øyvind Brandtsegg sometime in the late 1990s.

conv_sources

Source sounds for the convolution demo.


Liveconv

Sound A is recorded as the impulse response for liveconv. The length of the filter is approximately 0.5 seconds (the duration of the spoken word used as source). The IR is replaced approximately every 5.5 seconds, with the update timing set manually to line up with when the appropriate word appears in the source. This way, the first IR contains the word “alpha”, which is then replaced after 5.5 seconds with the word “bravo”, and so on until all four words have been used as IRs.

conv_liveconv

Liveconv demo using the above source sounds.
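
To make this concrete, here is a minimal Csound sketch of the setup, assuming the liveconv opcode as documented in the Csound manual (ares liveconv ain, ift, iplen, kupdate, kclear). The source file name, the partition length of 1024 samples and the metro-driven update timing are illustrative stand-ins, not the actual session code; in the demo the updates were timed by hand so that a complete word sat in the table before each reload.

<CsoundSynthesizer>
<CsInstruments>
sr     = 44100
ksmps  = 64
nchnls = 1
0dbfs  = 1

; IR table: 32768 samples (~0.74 s), zero-padded beyond the ~0.5 s word
giIR ftgen 1, 0, 32768, -2, 0

instr 1
  aA, aB  diskin2 "conv_sources.wav", 1   ; A = spoken words, B = drum loop (hypothetical file name)
  ; continuously write the word channel into the IR table (circular buffer)
  andx    phasor  sr/ftlen(giIR)          ; one sweep of the table per ~0.74 s
          tablew  aA, andx, giIR, 1       ; 1 = normalized write index
  ; ask liveconv to reload the IR from the table roughly every 5.5 seconds
  kupd    metro   1/5.5
  aout    liveconv aB, giIR, 1024, kupd, 0
          out     aout * 0.5              ; scale down, convolution can get loud
endin
</CsInstruments>
<CsScore>
i 1 0 25
</CsScore>
</CsoundSynthesizer>

The point of liveconv here is that each kupd trigger swaps in the new IR incrementally, partition by partition, so the filter can be replaced mid-performance without clicks or dropouts.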

tvconv

Sound A and sound B run continuously through the filter, and no freezing of coefficients is applied. The filter length is 32768 samples, approximately 0.74 seconds at a sample rate of 44.1 kHz.

conv_tvconv

tvconv demo using the above source sounds.
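
As a rough sketch, the corresponding instrument could look like this, assuming the tvconv signature from the Csound manual (ares tvconv asig1, asig2, xfreez1, xfreez2, iparts, ifils); it can slot into the same orchestra as the liveconv sketch above, with the same illustrative file name and partition size. We hold both freeze inputs at 0, which we take to mean that both signals keep streaming into the filter, matching the no-freezing setup described above.

instr 2
  aA, aB  diskin2 "conv_sources.wav", 1   ; same hypothetical stereo source file
  ; both signals run continuously into the time-varying filter:
  ; freeze flags at 0 (assumed: no freezing), partition size 1024,
  ; total filter length 32768 samples (~0.74 s at 44.1 kHz)
  aout    tvconv  aA, aB, 0, 0, 1024, 32768
          out     aout * 0.5
endin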