Live convolution with Kjell Nordeson

Session at UCSD March 14.

Kjell Nordeson: Drums
Øyvind Brandtsegg: Vocals, Convolver.

Contact mikes

In this session, we explore the use of convolution with contact mikes on the drums to reduce feedback and cross-bleed. There is still some bleed from the drums into the vocal mike, and there is some feedback potential caused by the (miked) drumheads resonating with the sound coming from the speaker. We had earlier used professional contact mikes, but found that our particular type had a particularly low output, so this time we tried simple and cheap piezo elements from Radio Shack, connected directly to high-impedance inputs on the RME soundcard. This seems to give very high sensitivity and a fair signal-to-noise ratio. The frequency response is quite narrow and “characteristic”, to put it mildly, but for our purposes it can work quite well. Also, the high-frequency loss associated with convolution is less of an issue when the microphones have such an abundance of high frequencies (and little or no low end).

IR update triggering

We have added the option of using a (MIDI) pedal to trigger IR recording. This allows more deliberate performative control of the IR update. It was first used by Kjell, while Øyvind was playing through Kjell’s IR; later we switched roles. Kjell notes that the progression from IR to IR works well, and that we definitely have some interesting interaction potential here. The merging of the sound from the two instruments creates a “tail” of what has been played, and we continue to respond to that for a while.
When Kjell recorded the IR, he found it an extra distraction to also have to focus on what to record, and to operate the pedal accordingly. The mental distraction probably lies not so much in the actual operation of the pedal as in the reflection over what would make a good sound to record. It is not yet easy to foresee (hear) what comes out of the convolution process, so understanding how a particular input will work as an IR is a sort of remote, second-degree guesswork. This is of course further complicated by not knowing what the other performer will play through the recorded IR. This will obviously improve with more experience using the techniques.
When we switched roles (vocals recording the IR), the acoustic/technical situation became a bit more difficult. The contact mikes would pick up enough sound from the speakers (also through freely resonating cymbals resting on the drums, and via non-damped drumheads) to create feedback problems. This also creates extra repetitions of the temporal pattern of the IR due to the feedback potential. It was harder to get the sound completely dry and distinct, so the available timbral dynamic was more in the range from “mushy” to “more mushy” (…). Still, Kjell felt this was “more like playing together with another musician”. The feeling of playing through the IR is indeed the more performatively responsive situation, though here it was overpowered by the reduction in clarity caused by the technical/acoustical difficulties. Similarly, Øyvind thought it was harder because the vocals only manifest themselves as the ever-changing IR, and the switching of the IR does not necessarily come across as a real/quick/responsive musical interaction. Also, delivering material for the IR makes the quality of the material and the exact excerpt much more important. It is like giving away some part of what you’ve played, and it must be capable of being transformed out of your own control, so the material might become more transparent to its weaknesses. One can’t hide flaws by stringing the material together in a well-flowing manner; rather, the stringing-together is activated by the other musician. I can easily recognize this as the situation any musician being live sampled or live processed must feel, so it is a “payback time” revelation for me, having been in the role of processing others for many years.

Automatic IR update

We also tried automatic/periodic IR updates, as that would take away the distraction of selecting IR material, letting us more easily just focus on performing. The automatic updates show their own set of weaknesses when compared with the manually triggered ones. The automatic update procedure essentially creates a random latency for the temporal patterns created by the convolution. This is because the periodic update is not in any way synchronized to the playing, and the performers have no feedback (visual or auditory) on the update pulse. This means the IR update might happen offbeat or in the middle of a phrase. Kjell suggested further randomizing it as one solution. To this, Øyvind responds that it is already essentially random, since the segmentation of input and the implied pulse of the material are unrelated, so it will shift in an unpredictable and always changing manner. Then again, following up on Kjell’s suggestion and randomizing it further could create a whole other, more statistical approach. Kjell also remarks that this way of playing feels more like “an effect”, something added, that does not respond as interactively. It just creates an (echo pattern) tail out of whatever is currently played. He suggested updating the IR at a much lower rate, perhaps once every 10 seconds. We tried a take with this setting too.
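To illustrate the scheduling logic under discussion, here is a small Python sketch of the periodic trigger, with an optional jitter along the lines of Kjell’s randomization suggestion. This is purely illustrative: the actual convolver runs in real time in a different environment, and the function name and jitter model are our own assumptions.

```python
import random

def next_update_time(now, base_period, jitter=0.0):
    # With jitter = 0 this is the plain periodic update used in the session
    # (e.g. base_period = 1/0.6 seconds for a 0.6 Hz update rate).
    # A nonzero jitter (0..1) randomizes each interval by up to that
    # fraction of the base period, giving the "more statistical" behaviour.
    offset = random.uniform(-jitter, jitter) * base_period
    return now + base_period + offset
```

With jitter at 0.5, an update scheduled every 10 seconds would actually arrive anywhere between 5 and 15 seconds after the previous one.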

Switching who has the trigger pedal

Then, since the automatic updates seem not to work too well, and the mental distraction of selecting IR material seems unwanted, we figured maybe the musician playing through the IR should be the one triggering the IR recording. This is similar (but exactly opposite) to the previous attempts at manual IR record triggering. Here, the musician playing through the IR decides the time of IR recording, and as such has some influence over the IR content. Still, he cannot decide what the other musician is playing at the time of recording, but this type of role distribution could create yet another kind of dynamic in the interplay. Sadly, the session was interrupted by practical matters at this point, so the work must continue on a later occasion.

Audio


Take 1: Percussion IR, vocal playing through the IR. Recording/update of IR done by manual trigger pedal controlled by the percussionist. Thus it is possible to emphasize precise temporal patterns. The recording is done only with contact mikes on the drums, so there is some “disconnectedness” to the acoustic sound.


Take 2: Vocal IR, percussion playing through the IR. Recording/update of the IR done by manual trigger pedal controlled by the singer. As in take 1, the drum sound going into the convolver is only taken from the piezo pickups. Still, there is a better connectedness to the acoustic drum sound, due to an additional room mike being used (dry).


Take 3: Percussion IR, automatic/periodic IR update. IR length is 3 seconds, IR update rate is 0.6 Hz.


Take 4: Percussion IR, automatic/periodic IR update. IR length is 2.5 seconds, IR update rate is 0.2 Hz.

Other reflections

IR replacement is often experienced as a sudden musical change. There are no artifacts caused by the technical operation of updating the IR, but the musical result is more often a total change of “room characteristic”. Maybe we should try finding methods of slowly crossfading when updating the IR, keeping some aspects of the old one in a transitory phase. There is also a lot to be gained performatively by the musician updating the IR having these transitions in mind. Choosing what to play and what to record is an effective way of controlling whether the transitions should be sudden or slow.
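One possible way to realize the crossfading idea, sketched offline in Python with numpy: run the input through both the old and the new IR, and fade between the two results over a transition window. This is a sketch under our own assumptions (function names are hypothetical, and a real-time version would use partitioned convolution rather than full offline convolution):

```python
import numpy as np

def crossfade_ir_update(signal, old_ir, new_ir, fade_len):
    """Convolve a signal with both IRs and crossfade between the results,
    so an IR update glides instead of abruptly replacing the "room"."""
    out_old = np.convolve(signal, old_ir)
    out_new = np.convolve(signal, new_ir)
    n = max(len(out_old), len(out_new))
    out_old = np.pad(out_old, (0, n - len(out_old)))
    out_new = np.pad(out_new, (0, n - len(out_new)))
    fade = np.ones(n)                      # after the fade, only the new IR
    fade[:fade_len] = np.linspace(0.0, 1.0, fade_len)
    return out_old * (1.0 - fade) + out_new * fade
```

A linear fade is used here for simplicity; an equal-power curve would likely sound smoother in practice.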

Session with classical percussion students at NTNU, February 20, 2017

Introduction:

This session was a first attempt at trying out cross-adaptive processing with pre-composed material. Two percussionists, Even Hembre and Arne Kristian Sundby, students at the classical section, were invited to perform a composition written for two tambourines. The musicians had already performed this piece earlier in rehearsals and concerts. As a preparation for the session, the musicians were asked to make a sound recording of the composition so that we could prepare analysis methods and the choice of effects before the session. A performance of the piece in its original form can be seen in this video – “Conversation for two tambourines” by Bobby Lopez performed by Even Hembre and Arne Kristian Sundby (recorded by Even Hembre).

Preparation:

Since both performers had limited experience with live electronics in general, we decided to introduce the cross-adaptive system gradually during the session. The session started with headphone listening, followed by introducing different sound effects while giving visual feedback to the musicians, then performing with adaptive processing, before finally introducing cross-adaptive processing. As a starting point, we used analysis methods which had already proved effective and intuitive in earlier sessions (RMS, transient density and rhythmical consonance). These methods also made it easier to communicate and discuss the technical process with the musicians during the session. The system was set up to control time-based effects such as delays and reverbs, but also typical insert effects like filters and overdrive. The effect control contained both dynamic changes of different effect parameters and a sample/hold function through the MIDIator. We had also brought a foot pedal so the performers could change the effects on the different parts of the composition during the performance.
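To illustrate the kind of mapping involved, here is a hypothetical sketch of a single MIDIator-style channel: a normalized analysis value (0..1) is scaled to a MIDI controller range, either continuously or latched via sample/hold. This is not the actual MIDIator code, just a simplified model of the two modes mentioned above:

```python
class MidiMapSketch:
    """One mapping channel: scale a normalized analysis value (0..1)
    to a MIDI range, with an optional sample/hold mode."""

    def __init__(self, lo=0, hi=127, sample_hold=False):
        self.lo, self.hi = lo, hi
        self.sample_hold = sample_hold
        self.value = lo

    def update(self, analysis_value, trigger=False):
        scaled = int(round(self.lo + (self.hi - self.lo) * analysis_value))
        if self.sample_hold:
            if trigger:          # only latch a new value on a trigger event
                self.value = scaled
        else:
            self.value = scaled  # continuous mode follows the input
        return self.value
```

In sample/hold mode the output stays constant between triggers, which is what makes it usable for switching effect settings at discrete points in the piece.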

Session:

After we had prepared and set up the system, we discovered severe latency on the outputs. Input signals seemed to function properly, but the cause of the output latency was not found. To solve the problem, we made a fresh set-up using the same analysis methods and effects, and after checking that the latency was gone, the session proceeded. We started with a performance of the composition without any effects, but with the performers using headphones to get familiar with the situation. The direct sound of each tambourine was panned hard left/right in the monitoring system to more easily identify the two performers. After an initial discussion it was decided that both tambourines should be located in the same room, since the visual communication between the performers was important in this particular piece. The microphones were separated with an acoustic barrier/screen and set to cardioid characteristic in order to avoid as much bleed between the two as possible. During the performance the MIDIator was adjusted to the incoming signals. It became clear that there were already some issues with bleed affecting the analyser at this stage, but we nevertheless retained the set-up to maintain the focus on the performance aspect. The composition had large variations in dynamics, and also in movement of the instruments. This was seen as a challenge considering the microphones’ static placements and the consequently large differences in input signal. Because of the movement, even small variations in distance between instrument and microphone would have a great impact on how the analysis methods read the signals. During the set-up, the visual feedback from the screen was a very welcome contribution to the performers’ understanding of the set-up. While setting up the MIDIator to control the effects, we tried playing through the composition again, trying out different effects.
Adding effects made a big impact on the performance. It became clear that the performers tried to “block out” the effects while playing, in order not to lose track of how the piece was composed. In this case the effects almost created a filter between the performers and the composition, resulting in a gap between what they expected and what they got. This could of course be a consequence of the effects that were chosen, but the situation demanded another angle to narrow everything down, in order to create a better understanding and connection between the performance and the technology. Since the composition consisted of different parts, we made a selection of one of the quieter parts, where the musicians could see how their playing affected the analysers, and how this could further be mapped to different effects using the MIDIator. There was still a large amount of overlap between the instruments into the analyser because of bleed, so we needed to take a break and rearrange the physical set-up in the room to further clarify the connection between musical input, analyser, MIDIator and effects. Avoiding the microphone bleed helped both the system and the musicians to clarify how the input reacted to the different effects. Since the performers were interested in how this changed the sound of their instruments, we agreed to abandon the composition, and instead test out different set-ups, both adaptive and cross-adaptive.

Sound examples:

1. Trying different effects on tambourine, processing musician controlling all parameters. Tambourine 1 (Even) is convolved with a recording of water and a cymbal. Tambourine 2 (Arne Kristian) is processed with delay, convolved with a recording of small metal parts and a pitch delay.

2. Tambourine 1 (Even) is analysed using transient density. The transient density controls a delay plug-in on tambourine 2 (Arne Kristian).

3. Tambourine 2 (Arne Kristian) is analysed by transient density, controlling a send from tambourine 1 convolved with a cymbal. The higher the transient density, the less send.

4. Keeping the mapping settings from examples 2 and 3, but adding rhythmical consonance analysis on tambourine 2 to control another send level from tambourine 1, convolving it with a recording of water. The higher the consonance, the more send. The transient density analysis on tambourine 1 is in addition mapped to control a send from tambourine 2, convolving it with metal parts. The higher the density, the more send.

Observations:

Even though we worked with a composed piece, it would have been a good idea to have a “rehearsal” with the performers beforehand, focusing on different directions through processing. This could open up thoughts about how to make a new and meaningful interpretation of the same composition with the new elements.

It was a good idea to record the piece beforehand in order to construct the processing system, but this recording did not have any separation between the instruments either. This resulted in preparing and constructing a system that in principle was unable to be cross-adaptive, since it both analysed and processed the sum of both instruments, leaving much less control to the individual musicians. This aspect, which also concerns bleed between microphones in more controlled environments, challenges the concept of fully controlling a cross-adaptive performance. This challenge will probably be further magnified in a concert situation performing through speakers. The musicians also noted that the separation between microphones was crucial for the understanding of the process, and for the possibility to get a feeling of control.

In retrospect, the time-based effects prepared for this session could also be reconsidered, since several of them often worked against the intention of the composition, especially in the most rhythmical parts. Even noted: “Sometimes it’s like trying to speak with headphones that play your voice right after you have said the word, and that unable you to continue”.

This particular piece could probably benefit from more subtle changes from the processing. The sum of this reduced the interaction between the performers and the technology. This became clearer when we abandoned the composition and concentrated on interaction in a more “free” setting. One way of going further into this particular composition could be to take a mixed-music approach, and “recompose” and interpret it again with the processing element as a more integral part of the composition process.

In the following and final part of the session, the musicians were allowed to improvise freely while being connected to the processing system. This was experienced as much more fruitful by both performers. The analysis algorithms focusing on rhythmical aspects, namely transient density and rhythmical consonance, were both experienced as meaningful and connected to the performers’ playing. These control parameters were mapped to effects like convolution and delay (cf. explanation of sound examples 1-4). The performers focused on issues of control, the differences between “normal” and inverse mapping, headphone monitoring and microphone bleed when discussing their experiences of the session (see the video digest below for some highlights).

Video digest from session February 20, 2017

Convolution experiments with Jordan Morton

Jordan Morton is a bassist and singer who regularly performs using both instruments combined. This provides an opportunity to explore how the liveconvolver can work when both the IR and the live input are generated by the same musician. We did a session at UCSD on February 22nd. Here are some reflections and audio excerpts from that session.

General reflections

Compared with playing with live processing, Jordan felt it was more “up to her” to make sensible use of the convolution instrument. With live processing controlled by another musician, there is also creative input from another source. In general, electronic additions to the instrument can sometimes add unexpected but desirable aspects to the performance. With live convolution, where she is providing both signals, there is a triple (or quadruple) challenge: she needs to decide what to play on the bass, what to sing, explore how those two signals work together when convolved, and finally make it all work as a combined musical statement. It appears this is all manageable, but she’s not getting much help from the outside. In some ways, working with convolution could be compared to looping and overdubs, except the convolution is not static. One can overlay phrases and segments by recording them as IRs, while shaping their spectral and temporal contour with the triggering sound (the one being convolved with the IR).
Jordan found it easier to play bass through the vocal IR than the other way around. She tends to lead with the bass when playing bass + vocals acoustically. The vocals are more an additional timbre, added to complete harmonies etc., with the bass providing the ground. Maybe the instrument playing through the IR has the opportunity of more actively shaping the musical outcome, while the IR record source is more a “provider” of an environment for the other to actively explore?
In some ways it can seem easier to manage the two roles (of IR provider and convolution source) as one person than splitting the incentive between two performers. The roles become more separated when they are split between different performers than when one person has both roles and switches between them. When having both roles, it can be easier to explore the nuances of each role. It is possible to test out musical incentives by doing this here and then this there, instead of relying on the other person to immediately understand (for example to *keep* the IR, or to *replace* it *now*).

Technical issues

We explored transient-triggered IR recording, but had significant acoustic bleed from the bass into the vocal microphone, which made clean transient triggering a bit difficult. A reliable transient-triggered recording would be very convenient, as it would allow the performer to “just play”. We tried using manual triggering, controlled by Oeyvind. This works reliably but involves some guesswork as to what is intended to be recorded. As mentioned earlier (e.g. in the first Oslo session), we could wish for a foot pedal trigger or other controller directly operated by the performer. Hey, it’s easy to do, let’s just add one for next time.
We also explored continuous IR updates based on a metronome trigger. This allows periodic IR updates, in a seemingly streaming fashion. Jordan asked for an indication of the metronomic tempo for the updates, which is perfectly reasonable and would be a good idea to implement (although it had not been implemented yet). One distinct difference noted when using periodic IR updates is that the IR is always replaced. Thus, it is not possible to “linger” on an IR and explore the character of some interesting part of it. One could simulate such exploration by continuously re-recording similar sounds, but it might be more fruitful to have the ability to “hold” the IR, preventing updates while exploring one particular IR. This hold trigger could reasonably also be placed on a footswitch or other accessible control for the performer.
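The hold idea can be sketched as a small piece of update logic: metronome ticks trigger IR recording only while the hold switch is off. This is a hypothetical Python model of the behaviour described above, not the actual implementation:

```python
class PeriodicIRUpdater:
    """Periodic IR update logic with a 'hold' switch: while hold is
    engaged, the metronome ticks are ignored so the performer can keep
    exploring the current IR."""

    def __init__(self, period):
        self.period = period      # seconds between automatic IR updates
        self.hold = False         # e.g. bound to a footswitch
        self.last_update = None

    def tick(self, now):
        """Call regularly with the current time (in seconds); returns
        True whenever a new IR recording should be triggered."""
        if self.hold:
            return False
        if self.last_update is None or now - self.last_update >= self.period:
            self.last_update = now
            return True
        return False
```

Releasing the hold would let the next due tick replace the IR again, so the performer can alternate between a “streaming” and a “lingering” mode.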

Audio excerpts


Take 1: Vocal IR, recording triggered by transient detection.


Take 2: Vocal IR, manually triggered recording


Take 3: Vocal IR, periodic automatic trigger of IR recording.


Take 4: Vocal IR, periodic automatic trigger of IR recording (same setup as for take 3)


Take 5: Bass IR, transient triggered recording. Transient triggering worked much cleaner on the bass since there was less signal bleed from voice to bass than vice versa.

Docmarker tool

Docmarker

During our studio sessions and other practical research work, we noted that we needed a tool to annotate documentation streams. The stream could be an audio file, a video or some other line of timed events. Audio editors and DAWs have tools for dropping markers into a file, and there are also tools for annotating video. However, we wanted an easy way of recording timed comments from many users, allowing these to be tied to any sequence of events, whether recorded as audio, video or in other form. We also wanted each user to be able to make comments without necessarily having access to the “original” file, and for several users to be able to make comments simultaneously. By allowing comments from several users to be merged, one can also use this to do several “passes” of making comments, merging with one’s own previous comments.

Assumedly, one can use this for other kinds of timed comments too. Taking notes on one’s own audio mixes, making edit lists from long interviews, … even marking of student compositions…

The tool is a simple Python script, and the code can be found at https://github.com/Oeyvind/docmarker
Download, unzip, and run from the terminal with:
python doc_marker.py
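The merging idea itself is simple; as an illustration (not the actual docmarker code), timed comments from several users could be combined chronologically like this, assuming each comment is a (time, user, text) tuple:

```python
def merge_comments(*comment_lists):
    """Merge timed comments from several users into one chronological
    list. Each comment is a (time_in_seconds, user, text) tuple."""
    merged = [c for lst in comment_lists for c in lst]
    return sorted(merged, key=lambda c: c[0])
```

Since the comments only carry timestamps relative to the documented stream, no user needs access to the original file to contribute.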

Session at UCSD, February 14, 2017

Liveconvolver4_trig

Session objective

The session objective was to explore the live convolver: how it can affect our playing together and how it can be used. New convolver functionality for this session is the ability to trigger IR updates via transient detection, as opposed to manual triggering or periodic metro-triggered updates. The transient triggering is intended to make the IR updating more intuitive and to provide a closer interaction between the two performers. We also did some quick exploration of adaptive effects processing (not cross-adaptive, just auto-adaptive). The cross-adaptive interactions can sometimes be complex. One way to familiarize ourselves with the analysis methods and the modulation mappings could be to allow musicians to explore how these are directly applied to their own instrument.
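The essence of transient triggering can be sketched as follows: report a trigger whenever the amplitude envelope rises above a threshold, with a minimum gap between triggers so one attack does not fire repeatedly. This is a simplified offline Python model under our own assumptions; a real detector (like the one in the convolver) would typically compare against a running average rather than a fixed threshold:

```python
def transient_triggers(env, threshold, min_gap):
    """Return the indices in the amplitude envelope `env` where a
    transient should trigger IR recording: upward crossings of
    `threshold`, at least `min_gap` samples apart."""
    triggers = []
    last = -min_gap
    above = False
    for i, v in enumerate(env):
        if v >= threshold and not above and i - last >= min_gap:
            triggers.append(i)
            last = i
        above = v >= threshold
    return triggers
```

The `min_gap` parameter corresponds to the kind of responsiveness adjustment mentioned for take 3 below: too short and a single gesture retriggers the recording, too long and quick interactions are missed.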

Kyle Motl: bass
Oeyvind Brandtsegg: convolver/singer/tech/camera

Live convolver

Several takes were done, experimenting with manual and transient-triggered IR recording, switching between the role of recording/providing the impulse response and that of playing through, or on, the resulting convolver. Reflections on these two distinct performative roles were particularly fruitful and to some degree surprising. Technically, the two sound sources of audio convolution are equal; it does not matter which way the convolution is done (one sound with the other, or vice versa). The output sound will be the same. However, our liveconvolver does treat the two signals slightly differently, since one is buffered and used as the IR, while the other signal is directly applied as input to the convolver. The buffer can be updated at any time, in such a fashion that no perceptible extra delay occurs due to that part of the process. Still, the update needs to be triggered somehow. Some of the difference in roles occurs due to the need for (and complications of) the triggering mechanism, but perhaps the deepest difference occurs due to something else. There is a performative difference between the action of providing an impulse response for the other one to use, and the action of directly playing through the IR left by the other. Technically, the difference is minute, due to the streamlined and fast IR update. Perhaps also the sounding result will be perceptually indistinguishable for an outside listener. Still, the feeling for the performer is different in the two roles. We noted that one might naturally play different types of things, different kinds of musical gestures, in the two different roles. This inclination can be overcome by intentionally doing what would belong to the other role, but it seems the intuitive reaction to the role is different in each case.

Video: a brief glimpse into the session environment.


Take 1: IR recorded from vocals, with a combination of manual and transient triggering. The bass is convolved with the live vocal IR. No direct (dry) signals were recorded, only the convolver output. Later takes in the session also recorded the direct sound from each instrument, which makes it easier to identify the different contributions to the convolution. This take serves more as a starting point from which we continued working.


Take 2: Switched roles, so IR is now recorded from the bass, and the vocals are convolved with this live updated IR. The IR updates were triggered by transient detection of the bass signal.


Take 3: As for take 2, the IR is recorded from the bass. We changed the bass mic to try to reduce feedback, and adjusted the transient triggering parameters so that IR recording would be more responsive.

Video: Reflections on IR recording, on the roles of providing the IR as opposed to being convolved by it.

Kyle noticed that he would play different things when recording IR than when playing through an IR recorded by the vocals. Recording an IR, he would play more percussive impulses, and for playing through the IR he would explore the timbre with more sustained sounds. In part, this might be an effect of the transient triggering, as he would have to play a transient to start the recording. Because of this we also did one recording of manually triggered IR recording with Kyle intentionally exploring more sustained sounds as source for the IR recording. This seems to even out the difference (between recording IR and playing through it) somewhat, but there is still a performatively different feeling between the two modes.
When having the role of “IR recorder/provider”, one can be very active and continuously replace the IR, or leave it “as is” for a while, letting the other musician explore the potential in this current IR. Being more active and continuously replacing the IR allows for a closer musical interaction, responding quickly to each other. Still, the IR is segmented in time, so the “IR provider” can only leave bits and pieces for the other musician to use, while the other musician can directly project his sounds through the impulse responses left by the provider.


Take 4: IR is recorded from bass. Manual triggering of the IR recording (controlled by a button), to explore the use of more sustained impulse responses.

Video: Reflections on manually triggering the IR update, and on the specifics of transient triggered updates.


Take 5: Switching roles again, so that the IR is now provided by the vocals and the bass is convolved. Transient triggered IR updates, so every time a vocal utterance starts, the IR is updated. Towards the end of the take, the potential for faster interaction is briefly explored.

Video: Reflections on vocal IR recording and on the last take.

Convolution sound quality issues

The nature of convolution will sometimes create a muddy-sounding audio output. The process will dampen high-frequency content and emphasize lower frequencies. Areas of spectral overlap between the two signals will also be emphasized, and this can create a somewhat imbalanced output spectrum. As the temporal features of both sounds are also “smeared” by the other sound, this additionally contributes to the potential for a cloudy mush. It is well known that brightening the input sounds prior to convolution can alleviate some of these problems. Further refinements have been made recently by Donahue, Erbe and Puckette in the ICMC paper “Extended Convolution Techniques for Cross-Synthesis”. Although some of the proposed techniques do not allow realtime processing, the broader ideas can most certainly be adapted. We will explore this further potential for refinement of our convolver technique.
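The classic brightening approach can be sketched with a first-order pre-emphasis filter applied to both inputs before convolution, and the matching de-emphasis applied once to the output. This is a generic illustration, not the technique from the cited paper, and the coefficient value is an assumption:

```python
import numpy as np

def preemphasis(x, coeff=0.97):
    """Brighten a signal: y[n] = x[n] - coeff * x[n-1].
    Applying this to both inputs before convolving counters the
    low-frequency buildup that convolution tends to produce."""
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - coeff * x[:-1]
    return y

def deemphasis(y, coeff=0.97):
    """Exact inverse of preemphasis: x[n] = y[n] + coeff * x[n-1]."""
    x = np.empty_like(y)
    x[0] = y[0]
    for n in range(1, len(y)):
        x[n] = y[n] + coeff * x[n - 1]
    return x
```

Since the output of a convolution of two pre-emphasized signals carries the emphasis twice, de-emphasis would be applied twice (or with a squared filter) to restore the overall balance.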

As can be heard in the recordings from this session, there is also a significant feedback potential when using convolution in a live environment and the IR is sampled in the same room as it is directly applied. The recordings were made with both musicians listening to the convolver output over speakers in the room. If we had been using headphones, the feedback would not have been a problem, but we wanted to explore the feeling of playing with it in a real/live performance setting. Oeyvind would control simple highpass and lowpass filtering of the convolver output during performance, and thus had a rudimentary means of manually reducing feedback. Still, once unwanted resonances are captured by the convolution system, they will linger for a while in the system output. Nothing has been done to repair or reduce the feedback in these recordings; we keep it as a strong reminder that this is something that needs to be fixed in the performance setup. Possible solutions consist of exploring traditional feedback reduction techniques, but it could also be possible to do an automatic equalization based on the accumulated spectral content of the IR. This latter approach might also help output scaling and general spectral balance, since already prominent frequencies would have less potential to create strong resonances.
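The IR-based equalization idea might be sketched along these lines: take the magnitude spectrum of the current IR and derive per-bin attenuation that cuts the most prominent frequencies hardest. This is only a speculative sketch of the proposal above; the function name, the linear cut-versus-magnitude mapping and the dB limit are all our own assumptions:

```python
import numpy as np

def ir_compensation_gains(ir, max_cut_db=12.0):
    """Per-bin output gains derived from the IR spectrum: the strongest
    resonance is cut by max_cut_db, weak bins are left untouched."""
    mag = np.abs(np.fft.rfft(ir))
    mag = mag / (mag.max() + 1e-12)   # normalize: strongest resonance = 1
    gain_db = -max_cut_db * mag       # cut prominent bins the most
    return 10 ** (gain_db / 20.0)     # linear per-bin gains, all <= 1
```

Applied to the convolver output (e.g. via an FFT filter), this would pre-emptively tame the frequencies most likely to ring, rather than waiting for feedback to build up.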

Adaptive processing

As a way to investigate and familiarize ourselves with the different analysis features and the modulation mappings of these signals, we tried to work on auto-adaptive processing. Here, features of the audio input affects effect processing of the same signal. The performer can then more closely interact with the effects and explore how different playing techniques are captured by the analysis methods.


Adaptive take 1: Delay effect with spectral shift. Short (constant) delay time, like a slapback delay or comb filter. Envelope crest controls the cutoff frequency of a lowpass filter inside the delay loop. Spectral flux controls the delay feedback amount. Transient density controls a frequency shifter on the delay line output.


Adaptive take 2: Reverb. Rms (amplitude) controls reverb size. Transient density controls the cutoff frequency of a highpass filter applied after the reverb, so that higher density playing will remove low frequencies from the reverb. Envelope crest controls a similarly applied lowpass filter, so that more dynamic playing will remove high frequencies from the reverb.


Adaptive take 3: Hadron. Granular processing where the effect has its own multidimensional mapping from input controls to effect parameters. The details of this mapping are more complex. The resulting effect is that we have 4 distinctly different effect processing settings, where the X and Y axes of a 2D control surface provide a weighted interpolation between these 4 settings. Transient density controls the X axis, and envelope crest controls the Y axis. A live excerpt of the control surface is provided in the video below.
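The weighted interpolation between four corner settings can be sketched as standard bilinear interpolation over a unit square. This is only a simplified model in the spirit of the Hadron control surface (the actual mapping in Hadron is, as noted, more complex), and the function name and preset ordering are our own:

```python
import numpy as np

def interpolate_presets(x, y, presets):
    """Bilinear interpolation between four effect-parameter presets at
    the corners of a unit square. `presets` is a (4, n_params) array
    ordered (x0y0, x1y0, x0y1, x1y1); x and y are control axes in 0..1
    (here: transient density and envelope crest)."""
    w = np.array([(1 - x) * (1 - y),  # weight for corner (0, 0)
                  x * (1 - y),        # corner (1, 0)
                  (1 - x) * y,        # corner (0, 1)
                  x * y])             # corner (1, 1)
    return w @ np.asarray(presets, dtype=float)
```

At the corners, one preset is reproduced exactly; at the centre of the surface, all four settings contribute equally.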

Video of the Hadron Particle Synthesizer control surface controlled by bass transient density and envelope crest.

Some comments on analysis methods

The simple analysis parameters, like RMS amplitude and transient density, work well on all (or most) signals. However, other analysis dimensions (e.g. spectral flux, pitch, etc.) have a more inconsistent relation between signal and analysis output when used on different types of signals. They will perform well on some instrument signals and less reliably on others. Many of the current analysis signals have been developed and tuned with a vocal signal, and many of them do not work so consistently on, for example, a bass signal. Due to this, the auto-adaptive control (as shown in this session) is sometimes a little bit “flaky”. The auto-adaptive experiments seem a good way to discover such irregularities and inconsistencies in the analyzer output. Still, we also have a dawning realization that musicians can thrive with some “liveliness” in the control output. Some surprises and quick turns of events can provide energy and creative input for a performer. We saw this also in the Trondheim session where rhythm analysis was explored, and in the discussion of this in the follow-up seminar. There, Oeyvind stated that the output of the rhythm analyzer was not completely reliable, but the musicians stated they were happy with the kind of control it gave, and that it felt intuitive to play with. Even though the analysis sometimes fails or misinterprets what is being played, the performing musician will react to whatever the system gives. This is perhaps even more interesting (for the musician), says Kyle. It creates some sort of tension, something not entirely predictable. This unpredictability is not the same as random noise. There is a difference between something truly random and something very complex (like one could say about an analysis system that misinterprets the input). The analyzer would react the same way to an identical signal, but give disproportionately large variance in the output due to small variances in the input.
Thus, it is a nonlinear, complex response from the analyzer. In a technical sense it is controllable and predictable, but it is very hard to attain precise and determined control with a real-world signal. The variations and spurious misinterpretations create a resistance for the performer, something that creates energy and drive.