Session with classical percussion students at NTNU, February 20, 2017


This session was a first attempt at trying out cross-adaptive processing with pre-composed material. Two percussionists, Even Hembre and Arne Kristian Sundby, students in the classical section, were invited to perform a composition written for two tambourines. The musicians had already performed this piece in earlier rehearsals and concerts. In preparation for the session, the musicians were asked to make a sound recording of the composition so that analysis methods and choice of effects could be prepared in advance. A performance of the piece in its original form can be seen in this video – “Conversation for two tambourines” by Bobby Lopez, performed by Even Hembre and Arne Kristian Sundby (recorded by Even Hembre).


Since both performers had limited experience with live electronics, we decided to introduce the cross-adaptive system gradually during the session. The session started with headphone listening, followed by introducing different sound effects while giving visual feedback to the musicians, then performing with adaptive processing, before finally introducing cross-adaptive processing. As a starting point, we used analysis methods that had already proved effective and intuitive in earlier sessions (RMS, transient density and rhythmical consonance). These methods also made it easier to communicate and discuss the technical process with the musicians during the session. The system was set up to control time-based effects such as delays and reverbs, but also typical insert effects like filters and overdrive. The effect control included both dynamic changes of effect parameters and a sample/hold function through the MIDIator. We had also brought a foot pedal so the performers could switch effects between the different parts of the composition during the performance.
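The signal flow described above, where an analysis value is scaled into an effect-parameter range and optionally held until a trigger, can be sketched in a few lines. This is a minimal illustration of the mapping idea only; the function names, ranges and the sample/hold behaviour are our assumptions, not the actual MIDIator implementation.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map an analysis value into an effect-parameter range, clamped."""
    ratio = (value - in_lo) / (in_hi - in_lo)
    ratio = min(max(ratio, 0.0), 1.0)  # clamp to the input range
    return out_lo + ratio * (out_hi - out_lo)


class SampleHold:
    """Hold the last sampled value until a new trigger arrives,
    mimicking the sample/hold function mentioned above."""
    def __init__(self, initial=0.0):
        self.held = initial

    def process(self, value, trigger):
        if trigger:
            self.held = value
        return self.held


# Hypothetical example: transient density (events per second) mapped
# to a delay feedback amount between 0.0 and 0.9.
sh = SampleHold()
density = 4.0
feedback = scale(density, 0.0, 8.0, 0.0, 0.9)   # -> 0.45
held = sh.process(feedback, trigger=True)        # sampled and held
```

A continuous mapping makes the effect track the playing moment to moment, while the sample/hold variant freezes a parameter at a musically chosen instant, which can make the response easier for a performer to follow.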


After we had prepared and set up the system, we discovered severe latency on the outputs. Input signals seemed to function properly, but the cause of the output latency was never found. To solve the problem, we made a fresh set-up using the same analysis methods and effects, and after checking that the latency was gone, the session proceeded.

We started with a performance of the composition without any effects, but with the performers using headphones to get familiar with the situation. The direct sound of each tambourine was panned hard left/right in the monitoring system to make it easier to identify the two performers. After an initial discussion it was decided that both performers should be located in the same room, since the visual communication between them was important in this particular piece. The microphones were separated with an acoustic barrier/screen and set to a cardioid pattern in order to avoid as much bleed between the two as possible. During the performance, the MIDIator was adjusted to the incoming signals. It became clear that there were already issues with microphone bleed affecting the analyser at this stage, but we nevertheless retained the set-up to keep the focus on the performance aspect.

The composition had large variations in dynamics, and also in movement of the instruments. This was seen as a challenge given the microphones’ static placements and the consequently large differences in input signal. Because of the movement, even small variations in the distance between instrument and microphone would have a great impact on how the analysis methods read the signals. During the set-up, the visual feedback from the screen was a very welcome contribution to the performers’ understanding of the system. While setting up the MIDIator to control the effects, we played through the composition again, trying out different effects.
Adding effects had a big impact on the performance. It became clear that the performers tried to “block out” the effects while playing in order not to lose track of how the piece was composed. In this case the effects almost created a filter between the performers and the composition, resulting in a gap between what they expected and what they got. This could of course be a consequence of the effects that were chosen, but the situation demanded another angle to narrow everything down and create a better understanding and connection between the performance and the technology. Since the composition consisted of different parts, we selected one of the quieter parts where the musicians could see how their playing affected the analyses, and how this could further be mapped to different effects using the MIDIator. There was still a large amount of overlap between the instruments in the analyser because of microphone bleed, so we needed to take a break and rearrange the physical set-up in the room to further clarify the connection between musical input, analyser, MIDIator and effects. Avoiding the microphone bleed helped both the system and the musicians clarify how the input reacted to the different effects. Since the performers were interested in how this changed the sound of their instruments, we agreed to abandon the composition and instead test out different set-ups, both adaptive and cross-adaptive.

Sound examples:

1. Trying different effects on tambourine, processing musician controlling all parameters. Tambourine 1 (Even) is convolved with a recording of water and a cymbal. Tambourine 2 (Arne Kristian) is processed with delay, convolved with a recording of small metal parts and a pitch delay.


2. Tambourine 1 (Even) is analysed using transient density. The transient density controls a delay plug-in on tambourine 2 (Arne Kristian).


3. Tambourine 2 (Arne Kristian) is analysed by transient density, which controls a send from tambourine 1 convolved with the cymbal. The higher the transient density, the less send.


4. Keeping the mapping settings from examples 2 and 3, but adding rhythmical consonance analysis on tambourine 2 to control another send level from tambourine 1, convolving it with the recording of water. The higher the consonance, the more send. The transient density analysis on tambourine 1 is in addition mapped to control a send from tambourine 2, convolving it with the metal parts. The higher the density, the more send.
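The “normal” and inverse send mappings used in sound examples 2–4 can be sketched as two complementary functions. The numeric ranges here are illustrative assumptions; only the direction of the mappings (more activity gives more send, or less send) comes from the examples above.

```python
def normal_send(analysis, max_val):
    """More activity -> more send, as in example 4."""
    level = analysis / max_val
    return min(max(level, 0.0), 1.0)  # clamp to a 0..1 send level


def inverse_send(analysis, max_val):
    """More activity -> less send, as in example 3."""
    return 1.0 - normal_send(analysis, max_val)


# Hypothetical reading: dense playing, 7 events/sec of an assumed 8 max.
dense = 7.0
print(normal_send(dense, 8.0))   # 0.875 -> strong effect send
print(inverse_send(dense, 8.0))  # 0.125 -> effect almost removed
```

The inverse mapping is what makes the interaction feel reciprocal: one performer playing densely pulls the effect away from the other, so restraint becomes an active musical gesture.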



Even though we worked with a composed piece, it would have been a good idea to hold a “rehearsal” with the performers beforehand, focusing on different directions through processing. This could open up thinking about how to make a new and meaningful interpretation of the same composition with the new elements.


It was a good idea to record the piece beforehand in order to construct the processing system, but this recording did not have any separation between the instruments either. The result was a system that was, in effect, unable to be cross-adaptive, since it both analysed and processed the sum of both instruments, leaving much less control to the individual musicians. This aspect, which also concerns bleed between microphones in more controlled environments, challenges the concept of fully controlling a cross-adaptive performance. The challenge will probably be further magnified in a concert situation performing through speakers. The musicians also noted that the separation between microphones was crucial to understanding the process, and to the possibility of getting a feeling of control.
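The problem described above, a microphone reporting energy that belongs to the other performer, can be made concrete with a toy RMS calculation. The signals and bleed factor below are made-up numbers for illustration, not measurements from the session.

```python
import math


def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))


inst1 = [0.0] * 8            # performer 1 is silent
inst2 = [0.5, -0.5] * 4      # performer 2 plays loudly
bleed = 0.3                  # assumed fraction of inst2 leaking into mic 1

mic1 = [a + bleed * b for a, b in zip(inst1, inst2)]
print(rms(mic1))             # 0.15 -> the analyser "hears" performer 2
```

Even though performer 1 is silent, the analyser on microphone 1 reads a non-zero level, so any effect mapped from that channel responds to the wrong player. This is why the physical separation turned out to be crucial for the feeling of control.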

In retrospect, the time-based effects prepared for this session could also be reconsidered, since several of them often worked against the intention of the composition, especially in the most rhythmical parts. Even noted: “Sometimes it’s like trying to speak with headphones that play your voice right after you have said the word, and that unable you to continue”.

This particular piece could probably benefit from more subtle changes from the processing. All of this reduced the interaction between the performers and the technology. This became clearer when we abandoned the composition and concentrated on interaction in a more “free” setting. One way of going further into this particular composition could be to take a mixed-music approach, and “recompose” and interpret it again with the processing element as a more integrated part of the composition process.

In the following and final part of the session, the musicians were allowed to improvise freely while being connected to the processing system. This was experienced as much more fruitful by both performers. The analysis algorithms focusing on rhythmical aspects, namely transient density and rhythmical consonance, were both experienced as meaningful and connected to the performers’ playing. These control parameters were mapped to effects like convolution and delay (cf. the explanation of sound examples 1–4). The performers focused on issues of control, the differences between “normal” and inverse mapping, headphone monitoring and microphone bleed when discussing their experiences of the session (see the video digest below for some highlights).

Video digest from session February 20, 2017
