Seminar: Mixing and timbral character

Online conversation with Gary Bromham (London), Bernt Isak Wærstad (Oslo), Øyvind Brandtsegg (San Diego), Trond Engum and Andreas Bergsland (Trondheim). Gyrid N. Kaldestad, Oslo, was also invited but unable to participate.

The meeting revolves around the issue of “mixing and timbral character” as related to the crossadaptive project. As many aspects of the project touch upon these issues, we have kept the agenda quite open, asking each participant to bring one problem/question/issue.

Mixing, masking

In Oslo they worked with the analysis parameters spectral crest and flux, aiming to use these to create a spectral “ducking” effect, where the actions of one instrument could selectively affect separate frequency bands of the other instrument. Gary is also interested in these kinds of techniques for mixing, to work with masking (allowing and/or avoiding masking). One could think of it as multiband sidechaining with dynamic bands, like a de-esser, but adaptive to whichever frequency band currently needs modification. These techniques are related both to previous work on adaptive mixing (for example at QMUL) and are also partially addressed by recent commercial plugins, like Izotope Neutron.
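To make the idea concrete, here is a minimal sketch of such band-wise ducking, assuming a Python/numpy environment. It illustrates the principle only; the block size, threshold and ratio are arbitrary values and this is not the analysis/modulation chain used in the project.

```python
# Sketch of feature-driven multiband "ducking": where the sidechain signal is
# strong in a band, the corresponding band of the target is attenuated.
# Illustrative values only; a real implementation would use smoother,
# per-band gain changes with attack/release times.
import numpy as np

def spectral_duck(target, sidechain, fft_size=1024, hop=512,
                  threshold=0.01, ratio=4.0):
    window = np.hanning(fft_size)
    out = np.zeros(len(target) + fft_size)
    for start in range(0, len(target) - fft_size, hop):
        t = np.fft.rfft(target[start:start + fft_size] * window)
        s = np.fft.rfft(sidechain[start:start + fft_size] * window)
        s_mag = np.abs(s) / fft_size
        # reduce gain in bins where the sidechain exceeds the threshold
        over = np.maximum(s_mag - threshold, 0.0)
        gain = 1.0 / (1.0 + ratio * over / threshold)
        out[start:start + fft_size] += np.fft.irfft(t * gain) * window
    return out[:len(target)]
```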
However interesting these techniques are, the main focus of our current project is more on the performative application of adaptive and crossadaptive effects. That said, it could be fruitful to use these techniques, not to solve old problems, but to find new working methods in the studio as well. Within the scope of the project, this kind of creative studio work can be aimed at familiarizing ourselves with the crossadaptive methods in a controlled and repeatable setting. Bernt also brought up the issue of recording the analysis signals, using them perhaps as source material for creative automation, editing the recorded automation as one might see fit. This could also be an effective way of getting familiar with the analyzer output, as it invites taking a closer look at the details of the output of the different analysis methods. Recording the automation data is straightforward in any DAW, since the analyzer output comes into the DAW as external MIDI or OSC data. The project does not need to develop any custom tools to allow recording and editing of these signals, but it might be a very useful path of exploration in terms of working methods. I’d say yes please, go for it.
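Recording the same data outside the DAW is equally simple. Below is a small sketch that logs incoming OSC analysis messages with timestamps, assuming the python-osc package; the port number is an assumption for the example, not the analyzer’s actual setting.

```python
# Minimal OSC logger: writes incoming analysis values with timestamps to a
# CSV file, for later inspection, editing, or conversion to DAW automation.
# The port number is an assumption for illustration.
import time
from pythonosc import dispatcher, osc_server

logfile = open("analysis_log.csv", "w")
start_time = time.time()

def log_message(address, *args):
    # one line per message: time (s), OSC address, values
    values = ",".join(str(a) for a in args)
    logfile.write("%.4f,%s,%s\n" % (time.time() - start_time, address, values))
    logfile.flush()

disp = dispatcher.Dispatcher()
disp.set_default_handler(log_message)   # log every incoming address
server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 9901), disp)
server.serve_forever()
```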

Working with composed material, post production

Trond had recently done a crossadaptive session with classical musicians, playing composed material. It seems that this, even though done “live”, has much in common with applying crossadaptive techniques in post production or in mixing. This is because the interactive element is much less apparent. The composition is a set piece, so any changes to the instrumental timbre will not change what is played, but can rather influence the nuances of interpretation. Thus, it is much more a one-way process than a dialectic between material and performance. Experts on interpretation of composed music will perhaps cringe at this description, saying there is indeed a dialogue between interpretation and composition. While this is true, the degree to which the performed events can be changed is smaller within a set composition. In recent sessions, Trond felt that the adaptive effects would exist in a parallel world, outside of the composition’s aesthetic, something unrelated added on top. The same can be said about using adaptive and crossadaptive techniques in the mixing stage of a production, where all tracks are previously recorded and thus, in a sense, can be regarded as a set (non-changeable) source. With regards to applying analysis and modulation to recorded material, one could also mention that the Oslo sessions used recordings of the instruments in the session to explore the analysis dimensions. This was done as an initial exploratory phase of the session. The aim was finding features that already exist in the performer’s output, rather than imposing new dimensions of expression that the performer would need to adapt to.

On repeatability and pushing the system

The analysis-modulator response to an acoustic input is not always explicitly controllable. This is due to the nature of some of the analysis methods and to technical weaknesses that introduce “flicker” or noise in the analyzer output. Even though these deviations are not inherently random, they are complex and sometimes chaotic. In spite of these technical weaknesses, we notice that our performers often will thrive. Musicians will often “go with the flow” and create on the spot, the interplay being energized by small surprises and tensions, both in the material and in the interactions. This will sometimes allow the use of analysis dimensions/methods that have spurious noise/flicker, still resulting in a consistent and coherent musical output, due to the performer’s experience in responding to a rich environment of sometimes contradictory signals. This touches on one of the core aspects of our project: intervention into the traditional modes of interplay and musical communication. It also touches upon the transparency of the technology: how much should the performer be aware of the details of the signal chain? Sometimes rationalization makes us play safe. A fruitful scenario would be aiming for analysis-modulator mappings that create tension, something that intentionally disturbs and refreshes. The current status of our research leaves us with a seemingly unlimited number of combinations and mappings, a rich field of possibilities yet to be charted. The options are still so many that any attempt at conclusions about how it works or how to use it seems futile. Exploration in many directions is needed. This is not aimless exploration, but rather searching without knowing what can be found.

Listening, observing

Andreas mentions it is hard to pinpoint single issues in this rich field. As an observer, it can be hard to decode what is happening in the live setting. During sessions, it is sometimes a complex task to follow the exact details of the analysis and modulation. Then, when listening to the recorded tracks again later, it is easier to appreciate the musicality of the output. Perhaps not all details of the signal chain are cleanly defined and stringent in all aspects, but the resulting human interaction creates a lively musical output. As with other kinds of music making, it is easy to get caught up in detail at the time of creation. Trying to listen in a more holistic manner, taking in the combined result, is a skill not to be forgotten in our explorations either.

Adaptive vs cross-adaptive

One way of working towards a better understanding of the signal interactions involved in our analyzer-modulator system is to do adaptive modulation rather than cross-adaptive. This brings a much more immediate mode of control to the performer, exploring how the extracted features can be utilized to change his or her own sound. It seems several of us have been eager to explore these techniques, but have been putting it off since it did not align with the primary stated goals of crossadaptivity and interaction. Now, looking at the complexity of the full crossadaptive situation, it is fair to say that exploration of adaptive techniques can serve as a very valid manner of getting in touch with the musical potential of feature-based modulation of any signal. In its own right, it can also be a powerful method of sonic control for a single performer, as an alternative to a large array of physical controllers (pedals, faders, switches). As mentioned earlier in this session, working with composed material or set mixes can be a challenge for the crossadaptive methods. Exploring adaptive techniques might be more fruitful in those settings. Working with adaptive effects also brings the attention to other possibilities of control for a single musician over his or her own sound. Some of the recent explorations of convolution with Jordan Morton show the use of voice-controlled crossadaptivity as applied to a musician’s own sound. In this case, the dual instrument of voice and bass operated by a single performer allows similar interactions between instruments, but bypasses the interaction between different people, thus simplifying the equation somewhat. This also brings our attention to using voice as a modulator for effects for instrumentalists not using voice as part of their primary musical output. Although this has been explored by several others (e.g. Jordi Janner, Stefano Fasciani, and also the recent Madrona Labs “Virta” synth), it is a valid and interesting aspect, integral to our project.

 

Convolution experiments with Jordan Morton

Jordan Morton is a bassist and singer who regularly performs using both instruments combined. This provides an opportunity to explore how the liveconvolver can work when both the IR and the live input are generated by the same musician. We did a session at UCSD on February 22nd. Here are some reflections and audio excerpts from that session.

General reflections

As compared with playing with live processing, Jordan felt it was more “up to her” to make sensible use of the convolution instrument. With live processing being controlled by another musician, there is also a creative input from another source. In general, electronic additions to the instrument can sometimes add unexpected but desirable aspects to the performance. With live convolution where she is providing both signals, there is a triple (or quadruple) challenge: she needs to decide what to play on the bass, what to sing, explore how those two signals work together when convolved, and finally make it all work as a combined musical statement. It appears this is all manageable, but she’s not getting much help from the outside. In some ways, working with convolution could be compared to looping and overdubs, except the convolution is not static. One can overlay phrases and segments by recording them as IRs, while shaping their spectral and temporal contour with the triggering sound (the one being convolved with the IR).
Jordan felt it was easier to play bass through the vocal IR than the other way around. She tends to lead with the bass when playing acoustically on bass + vocals. The vocals are more an additional timbre, added to complete harmonies etc., with the bass providing the ground. Maybe the instrument playing through the IR has the opportunity of more actively shaping the musical outcome, while the IR record source is more a “provider” of an environment for the other to actively explore?
In some ways it can seem easier to manage the two roles (of IR provider and convolution source) as one person than splitting the incentive between two performers. The roles become more separated when they are split between different performers than when one person has both roles and switches between them. When having both roles, it can be easier to explore the nuances of each role. It is possible to test out musical incentives by doing this here and then that there, instead of relying on the other person to immediately understand (for example to *keep* the IR, or to *replace* it *now*).

Technical issues

We explored transient-triggered IR recording, but had significant acoustic bleed from the bass into the vocal microphone, which made clean transient triggering a bit difficult. A reliable transient-triggered recording would be very convenient, as it would allow the performer to “just play”. We tried using manual triggering, controlled by Oeyvind. This works reliably but involves some guesswork as to what is intended to be recorded. As mentioned earlier (e.g. in the first Oslo session), we could wish for a foot pedal trigger or other controller directly operated by the performer. It is easy to do, so let’s just add one for next time.
We also explored continuous IR updates based on a metronome trigger. This allows periodic IR updates, in a seemingly streaming fashion. Jordan asked for an indication of the metronomic tempo for the updates, which is perfectly reasonable and would be a good idea to implement (although it had not been implemented yet). One distinct difference noted when using periodic IR updates is that the IR is always replaced. Thus, it is not possible to “linger” on an IR and explore the character of some interesting part of it. One could simulate such exploration by continuously re-recording similar sounds, but it might be more fruitful to have the ability to “hold” the IR, preventing updates while exploring one particular IR. This hold trigger could reasonably also be placed on a footswitch or other accessible control for the performer.
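For reference, the trigger logic discussed here is simple enough to sketch. The following is a rough Python outline of the three modes (transient, periodic, and a performer-operated “hold”), with illustrative threshold and timing values; it is not the actual liveconvolver code.

```python
# Sketch of the three IR-update trigger modes discussed above:
# transient-detected, periodic (metronome), and a manual "hold" that
# blocks updates while the performer explores the current IR.
import numpy as np

class IRTrigger:
    def __init__(self, mode="transient", threshold=0.2, period_sec=2.0,
                 samplerate=44100, blocksize=256):
        self.mode = mode
        self.threshold = threshold          # illustrative transient threshold
        self.period_samps = int(period_sec * samplerate)
        self.blocksize = blocksize
        self.hold = False                   # e.g. tied to a footswitch
        self.prev_rms = 0.0
        self.samples_since_update = 0

    def should_update(self, block):
        """block: numpy array of samples. Return True if the IR should be re-recorded."""
        self.samples_since_update += self.blocksize
        if self.hold:
            return False
        if self.mode == "periodic":
            if self.samples_since_update >= self.period_samps:
                self.samples_since_update = 0
                return True
            return False
        # transient mode: trigger on a sudden rise in short-term level
        rms = float(np.sqrt(np.mean(block ** 2)))
        rising = rms - self.prev_rms > self.threshold
        self.prev_rms = rms
        if rising:
            self.samples_since_update = 0
        return rising
```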

Audio excerpts

jordan1

Take 1: Vocal IR, recording triggered by transient detection.

 

jordan2

Take 2: Vocal IR, manually triggered recording 

 

jordan3

Take 3: Vocal IR, periodic automatic trigger of IR recording.

 

jordan4

Take 4: Vocal IR, periodic automatic trigger of IR recording (same setup as for take 3)

 

jordan5

Take 5: Bass IR, transient triggered recording. Transient triggering worked much more cleanly on the bass, since there was less signal bleed from voice to bass than vice versa.

Session UCSD 14. February 2017

Liveconvolver4_trig

Session objective

The session objective was to explore the live convolver, how it can affect our playing together and how it can be used. New convolver functionality for this session is the ability to trigger the IR update via transient detection, as opposed to manual triggering or periodic metro-triggered updates. The transient triggering is intended to make the IR updating more intuitive and to provide a closer interaction between the two performers. We also did some quick exploration of adaptive effects processing (not cross-adaptive, just auto-adaptive). The crossadaptive interactions can sometimes be complex. One way to familiarize ourselves with the analysis methods and the modulation mappings could be to allow each musician to explore how these are applied directly to his or her own instrument.

Kyle Motl: bass
Oeyvind Brandtsegg: convolver/singer/tech/camera

Live convolver

Several takes were done, experimenting with manual and transient-triggered IR recording, switching between the role of “recording/providing the impulse response” and that of “playing through, or on, the resulting convolver”. Reflections on these two distinct performative roles were particularly fruitful and to some degree surprising. Technically, the two sound sources of audio convolution are equal; it does not matter which way the convolution is done (one sound with the other, or vice versa). The output sound will be the same. However, our liveconvolver does treat the two signals slightly differently, since one is buffered and used as the IR, while the other signal is directly applied as input to the convolver. The buffer can be updated at any time, in such a fashion that no perceptible extra delay occurs due to that part of the process. Still, the update needs to be triggered somehow. Some of the difference in roles occurs due to the need for (and complications of) the triggering mechanism, but perhaps the deepest difference occurs due to something else. There is a performative difference between the action of providing an impulse response for the other one to use, and the action of directly playing through the IR left by the other. Technically, the difference is minute, due to the streamlined and fast IR update. Perhaps the sounding result will also be perceptually indistinguishable for an outside listener. Still, the feeling for the performer is different within those two roles. We noted that one might naturally play different types of things, different kinds of musical gestures, in the two different roles. This inclination can be overcome by intentionally doing what would belong to the other role, but it seems the intuitive reaction to the role is different in each case.
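To make the asymmetry between the two roles concrete, here is a much simplified sketch of the signal flow in Python: one signal can be captured into the IR buffer on a trigger, while the other is convolved block by block with whatever IR is currently loaded. The real liveconvolver uses partitioned convolution for fast, glitch-free IR updates; this sketch only illustrates the structural difference between the two inputs.

```python
# Simplified live convolver: signal A can be captured into the IR buffer on a
# trigger, signal B is convolved block by block with the current IR.
# Real implementations use partitioned convolution to avoid latency; this
# sketch just illustrates the role asymmetry between the two inputs.
import numpy as np
from scipy.signal import fftconvolve

class LiveConvolver:
    def __init__(self, ir_seconds=1.0, samplerate=44100, blocksize=256):
        self.blocksize = blocksize
        self.ir = np.zeros(int(ir_seconds * samplerate))
        self.ir[0] = 1.0                    # start as a pass-through
        self.tail = np.zeros(len(self.ir) - 1)

    def record_ir(self, segment):
        """Replace the IR with a new segment of signal A (the 'provider')."""
        ir = np.zeros_like(self.ir)
        n = min(len(segment), len(ir))
        ir[:n] = segment[:n]
        peak = np.max(np.abs(ir))
        self.ir = ir / peak if peak > 0 else ir

    def process(self, block_b):
        """Convolve one block of signal B with the current IR (overlap-add)."""
        y = fftconvolve(block_b, self.ir)
        out = y[:self.blocksize] + self.tail[:self.blocksize]
        # carry the remainder of the convolution tail over to later blocks
        new_tail = np.zeros(len(self.ir) - 1)
        new_tail[:len(y) - self.blocksize] += y[self.blocksize:]
        new_tail[:len(self.tail) - self.blocksize] += self.tail[self.blocksize:]
        self.tail = new_tail
        return out
```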

Video: a brief glimpse into the session environment.

 

conv1_mix

Take 1: IR recorded from vocals, with a combination of manual and transient triggering. The bass is convolved with the live vocal IR. No direct (dry) signals were recorded, only the convolver output. Later takes in the session also recorded the direct sound from each instrument, which makes it easier to identify the different contributions to the convolution. This take serves more as a starting point from which we continued working.

 

conv2_mix

Take 2: Switched roles, so IR is now recorded from the bass, and the vocals are convolved with this live updated IR. The IR updates were triggered by transient detection of the bass signal.

 

conv3_mix

Take 3: As for take 2, the IR is recorded from the bass. We changed the bass mic to try to reduce feedback, and adjusted the transient triggering parameters so that the IR recording would be more responsive.

 

Video: Reflections on IR recording, on the roles of providing the IR as opposed to being convolved by it.

Kyle noticed that he would play different things when recording the IR than when playing through an IR recorded from the vocals. Recording an IR, he would play more percussive impulses, and when playing through the IR he would explore the timbre with more sustained sounds. In part, this might be an effect of the transient triggering, as he would have to play a transient to start the recording. Because of this, we also did one take with manually triggered IR recording, with Kyle intentionally exploring more sustained sounds as the source for the IR. This seems to even out the difference (between recording the IR and playing through it) somewhat, but there is still a performatively different feeling between the two modes.
When having the role of “IR recorder/provider”, one can be very active and continuously replace the IR, or leave it “as is” for a while, letting the other musician explore the potential in this current IR. Being more active and continuously replacing the IR allows for a closer musical interaction, responding quickly to each other. Still, the IR is segmented in time, so the “IR provider” can only leave bits and pieces for the other musician to use, while the other musician can directly project his sounds through the impulse responses left by the provider.

 

conv4_mix

Take 4: IR is recorded from bass. Manual triggering of the IR recording (controlled by a button), to explore the use of more sustained impulse responses.

 

Video: Reflections on manually triggering the IR update, and on the specifics of transient triggered updates.

 

conv5_mix

Take 5: Switching roles again, so that the IR is now provided by the vocals and the bass is convolved. Transient triggered IR updates, so every time a vocal utterance starts, the IR is updated. Towards the end of the take, the potential for faster interaction is briefly explored.

 

Video: Reflections on vocal IR recording and on the last take.

Convolution sound quality issues

The nature of convolution will sometimes create a muddy-sounding audio output. The process will dampen high-frequency content and emphasize lower frequencies. Areas of spectral overlap between the two signals will also be emphasized, and this can create a somewhat imbalanced output spectrum. As the temporal features of both sounds are also “smeared” by the other sound, this additionally contributes to the potential for a cloudy mush. It is well known that brightening the input sounds prior to convolution can alleviate some of these problems. Further refinements have been made recently by Donahue, Erbe and Puckette in the ICMC paper “Extended Convolution Techniques for Cross-Synthesis”. Although some of the proposed techniques do not allow realtime processing, the broader ideas can most certainly be adapted. We will explore this potential for further refinement of our convolver technique.
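The simplest form of the brightening mentioned above is a first-order pre-emphasis filter applied to both inputs before convolving. A small sketch, where the filter coefficient is an arbitrary illustrative choice:

```python
# Pre-emphasis before convolution: the first-order filter y[n] = x[n] - a*x[n-1]
# boosts highs and tames the low-end build-up that convolution tends to produce.
# The coefficient is an illustrative choice, not a tuned project setting.
import numpy as np
from scipy.signal import lfilter, fftconvolve

def pre_emphasis(x, coeff=0.95):
    return lfilter([1.0, -coeff], [1.0], x)

def brightened_convolution(a, b, coeff=0.95):
    # brighten both inputs, convolve, then normalize the peak level
    y = fftconvolve(pre_emphasis(a, coeff), pre_emphasis(b, coeff))
    peak = np.max(np.abs(y))
    return y / peak if peak > 0 else y
```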

As can be heard in the recordings from this session, there is also a significant feedback potential when using convolution in a live environment where the IR is sampled in the same room as it is directly applied. The recordings were made with both musicians listening to the convolver output over speakers in the room. If we had been using headphones, the feedback would not have been a problem, but we wanted to explore the feeling of playing with it in a real/live performance setting. Oeyvind would control simple highpass and lowpass filtering of the convolver output during performance, and thus had a rudimentary means of manually reducing feedback. Still, once unwanted resonances are captured by the convolution system, they will linger for a while in the system output. Nothing has been done to repair or reduce the feedback in these recordings; we keep it as a strong reminder that it is something that needs to be fixed in the performance setup. Possible solutions include exploring traditional feedback reduction techniques, but it could also be possible to do an automatic equalization based on the accumulated spectral content of the IR. This latter approach might also help output scaling and general spectral balance, since already prominent frequencies would have less potential to create strong resonances.
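A rough sketch of that last idea (not something implemented in the project yet): derive an attenuation curve from the magnitude spectrum of the current IR, so that the frequencies the IR already emphasizes, the likely feedback candidates, are cut in the convolver output. All values here are illustrative.

```python
# Rough sketch: derive an inverse EQ curve from the magnitude spectrum of the
# current IR, attenuating frequencies the IR already emphasizes (likely
# feedback candidates). Frame-wise application only; a real implementation
# would overlap-add and smooth the curve over time.
import numpy as np

def inverse_eq_from_ir(ir, fft_size=4096, max_cut_db=-12.0):
    spectrum = np.abs(np.fft.rfft(ir, n=fft_size))
    spectrum /= np.max(spectrum) + 1e-12          # 0..1, 1 = most prominent
    gains_db = max_cut_db * spectrum              # strongest bins get full cut
    return 10.0 ** (gains_db / 20.0)              # linear gain per bin

def apply_eq(block, gains, fft_size=4096):
    spec = np.fft.rfft(block, n=fft_size)
    return np.fft.irfft(spec * gains)[:len(block)]
```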

Adaptive processing

As a way to investigate and familiarize ourselves with the different analysis features and the modulation mappings of these signals, we tried to work on auto-adaptive processing. Here, features of the audio input affect the effect processing of the same signal. The performer can then more closely interact with the effects and explore how different playing techniques are captured by the analysis methods.
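The mapping layer in these auto-adaptive setups essentially scales (and optionally inverts and smooths) an analysis stream into a parameter range. A generic sketch of that step is shown below; the ranges and smoothing values are illustrative and not the actual settings used in the takes.

```python
# Generic feature-to-parameter mapping as used in the adaptive takes below:
# scale an analysis value into a parameter range, optionally inverted,
# with simple one-pole smoothing to tame flicker in the analysis stream.
class FeatureMap:
    def __init__(self, in_lo, in_hi, out_lo, out_hi, invert=False, smooth=0.9):
        self.in_lo, self.in_hi = in_lo, in_hi
        self.out_lo, self.out_hi = out_lo, out_hi
        self.invert = invert
        self.smooth = smooth        # 0 = no smoothing, closer to 1 = slower
        self.state = out_lo

    def __call__(self, value):
        x = (value - self.in_lo) / (self.in_hi - self.in_lo)
        x = min(max(x, 0.0), 1.0)
        if self.invert:
            x = 1.0 - x
        target = self.out_lo + x * (self.out_hi - self.out_lo)
        self.state = self.smooth * self.state + (1.0 - self.smooth) * target
        return self.state

# e.g. transient density 0..8 events/sec mapped to a highpass cutoff 50..2000 Hz
density_to_hpf = FeatureMap(0.0, 8.0, 50.0, 2000.0)
```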

 

cut_ad_dly1

Adaptive take 1: Delay effect with spectral shift. Short (constant) delay time, like a slapback delay or comb filter. Envelope crest controls the cutoff frequency of a lowpass filter inside the delay loop. Spectral flux controls the delay feedback amount. Transient density controls a frequency shifter on the delay line output.

 

cut_ad_rvb1

Adaptive take 2: Reverb. RMS (amplitude) controls reverb size. Transient density controls the cutoff frequency of a highpass filter applied after the reverb, so that higher-density playing will remove low frequencies from the reverb. Envelope crest controls a similarly applied lowpass filter, so that more dynamic playing will remove high frequencies from the reverb.

 

cut_ad_hadron1

Adaptive take 3: Hadron. Granular processing where the effect has its own multidimensional mapping from input controls to effect parameters. The details of the mapping are more complex. The resulting effect is that we have 4 distinctly different effect processing settings, where the X and Y axes of a 2D control surface provide a weighted interpolation between these 4 settings. Transient density controls the X axis, and envelope crest controls the Y axis. A live excerpt of the control surface is provided in the video below.
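The weighted interpolation over the XY surface can be thought of as a bilinear blend between the four corner settings. Below is a simplified stand-in for that idea with made-up parameter values; Hadron’s actual state interpolation is considerably more elaborate.

```python
# Bilinear blend of four parameter presets over an XY control surface,
# a simplified stand-in for Hadron's state interpolation. Preset values
# are made up for illustration.
def blend_presets(x, y, p00, p10, p01, p11):
    """x, y in 0..1; p00..p11 are dicts of parameter values at the corners."""
    out = {}
    for key in p00:
        bottom = (1 - x) * p00[key] + x * p10[key]
        top = (1 - x) * p01[key] + x * p11[key]
        out[key] = (1 - y) * bottom + y * top
    return out

# transient density -> x, envelope crest -> y (after normalization to 0..1)
presets = [{"grain_rate": 20, "pitch": 1.0}, {"grain_rate": 80, "pitch": 0.5},
           {"grain_rate": 10, "pitch": 2.0}, {"grain_rate": 50, "pitch": 1.5}]
params = blend_presets(0.3, 0.7, *presets)
```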

Video of the Hadron Particle Synthesizer control surface controlled by bass transient density and envelope crest.

Some comments on analysis methods

The simple analysis parameters, like rms amplitude and transient density, work well on all (or most) signals. However, other analysis dimensions (e.g. spectral flux, pitch, etc.) have a more inconsistent relation between signal and analysis output when used on different types of signals. They will perform well on some instrument signals and less reliably on others. Many of the current analysis methods have been developed and tuned with a vocal signal, and many of them do not work so consistently on, for example, a bass signal. Due to this, the auto-adaptive control (as shown in this session) is sometimes a little bit “flaky”. The auto-adaptive experiments seem a good way to discover such irregularities and inconsistencies in the analyzer output. Still, we also have a dawning realization that musicians can thrive with some “liveliness” in the control output. Some surprises and quick turns of events can provide energy and creative input for a performer. We saw this also in the Trondheim session where rhythm analysis was explored, and in the discussion of this in the follow-up seminar. There, Oeyvind stated that the output of the rhythm analyzer was not completely reliable, but the musicians stated they were happy with the kind of control it gave, and that it felt intuitive to play with. Even though the analysis sometimes fails or misinterprets what is being played, the performing musician will react to whatever the system gives. This is perhaps even more interesting (for the musician), says Kyle. It creates some sort of tension, something not entirely predictable. This unpredictability is not the same as random noise. There is a difference between something truly random and something very complex (like one could say about an analysis system that misinterprets the input). The analyzer would react the same way to an identical signal, but give disproportionately large variations in the output due to small variations in the input. Thus it is a nonlinear, complex response from the analyzer. In the technical sense it is controllable and predictable, but it is very hard to attain precise and determined control with a real-world signal. The variations and spurious misinterpretations create a resistance for the performer, something that creates energy and drive.

 

 

 

Seminar 16. December

Philosophical and aesthetical perspectives

– Report from meeting 16/12, Trondheim/Skype

Andreas Bergsland, Trond Engum, Tone Åse, Simon Emmerson, Øyvind Brandtsegg, Mats Claesson

The performers’ experiences of control:

In the last session (the Trondheim December session), Tone and Carl Haakon (CH) worked with rhythmic regularity and irregularity as parameters in the analysis. They worked with the same kind of analysis, and the same kind of mapping from analysis to effect parameter. After first trying the opposite, they ended up with: regularity = less effect, irregularity = more. They also included a sample/hold/freeze effect in one of the exercises. Øyvind commented on how Tone in the video stated that she thought it would be hard to play with so little control, but that she experienced that they worked intuitively with this parameter, which he found an interesting contradiction. Tone also expressed in the video that on the one hand she would sometimes hope for some specific musical choices from CH (“I hope he understands”), but on the other hand she “enjoyed the surprises”. These observations became a springboard for a conversation about a core issue in the project: the relationship between control and surprise, or between controlling and being controlled. We try to point here to the degree of specific and conscious intentional control, as opposed to “what just happens” due to technological, systemic, or accidental reasons. The experience from the Trondheim December session was that the musicians preferred what they experienced as an intuitive connection between input and outcome, and that this facilitated the process in that they could “act musically”. (This “intuitive connection” is easily related to Simon’s comment about “making ecological sense” later in this discussion.) Mats commented that in the first Oslo session the performers stated that they felt a similarity to playing with an acoustic instrument. He wondered if this experience had to do with the musicians’ involvement in the system setup, while Trond pointed out that the Trondheim December session and the Oslo session were pretty similar in this respect. A further discussion about what “control”, “alienation” and “intuitive playing” can mean in these situations seems appropriate.

Aesthetic and individual variables

This led to a further discussion about how we should be aware that the need for generalising and categorising – which is necessary at some point to actually be able to discuss matters – can lead us to overlook important variable parameters such as:

  • Each performer’s background, skills, working methods, aesthetics and preferences
  • That styles and genres relate differently to this interplay

A good example of this is Maja’s statement in the Brak/Rug session that she preferred the surprising, disturbing effects, which gave her new energy and ideas. Tone noted that this is very easy to understand when you have heard Maja’s music, and even easier if you know her as an artist and person. And it can be looked upon as a contrast to Tone/CH, who seek a more “natural” connection between action and sounding result; in principle, they want the technology to enhance what they are already doing. But, as cited above, Tone commented that this is not the whole truth. Surprises are also welcome in the Tone/Carl Haakon collaboration.

Simon underlined, because of these variables, the need to pin down in each session what actually happens, and not necessarily set up dialectical pairs. Øyvind pointed out, on the other hand, the need to lay out possible extremes and oppositions to create some dimensions (and terms) along which language can be used to reflect on the matters at hand.

Analysing or experiencing?

Another individual variable, both as audience and performer, is the need to analyse, to understand what is happening while perceiving a performance. One example brought up related to this was Andreas’ experience of how his audience perspective changed after he studied ear training. This new knowledge led him to take an analysing perspective, wanting to know what happened in a composition when performed. He also said: “as an audience you want to know things, you analyse”. Simon referred to “the inner game of tennis” as another example: how it is possible to stop appreciating playing tennis because you become too occupied analysing the game – thinking of the previous shot rather than clearing the mind ready for the next. Tone pointed at the individual differences between performers, even within the same genre (like harmonic jazz improvisation) – some are very analytic, also in the moment of performing, while others are not. This also goes for the various groups of audiences: some are analytic, some are not – and there is also most likely a continuum between the analytic and the intuitive musician/audience. Øyvind mentioned experiences from presenting the crossadaptive project to several audiences over the last few months. One of the issues he would usually present is that it can be hard for the audience to follow the crossadaptive transformations, since it is an unfamiliar mode of musical (ex)change. However, some of the simpler examples he then played (e.g. amplitude controlling reverb size and delay feedback) yielded the response that they were not hard to follow. One of the places where this happened was Santa Barbara, where Curtis Roads commented that he found it quite simple and straightforward to follow. Then again, in the following discussion, Roads also conceded that it was simple because the mapping between analysis parameter and modulated effect was known. Most likely it would be much harder to deduce what the connection was just by listening alone, since the connection (mapping) can be anything. Crossadaptive processing may be a complicated situation, not easy to analyse for either the audience or the performer. Øyvind pointed towards differences between parameters, as we had also discussed collectively: that some are more “natural” than others, like the connection between amplitude and effect, while some are more abstract, like the balance between noise and tone, or regular/irregular rhythms.

Making ecological sense/playing with expectations

Simon pointed out that we have a long history of connections to sound: some connections are musically intuitive because we have used them perhaps for thousands of years; they make ‘ecological’ sense to us. He referred to Eric Clarke’s “Ways of Listening: An Ecological Approach to the Perception of Musical Meaning” (2005) and William Gaver’s “What in the World Do We Hear?: An Ecological Approach to Auditory Event Perception” (1993). We come with expectations towards the world, and one way of making art is playing with those expectations. In modernist thinking there is a tendency to think of all musical parameters as equal – or at least equally organised – which may easily undermine their “ecological validity” – although that need not stop the creation of ‘good music’ in creative hands.

Complexity and connections

So, if the need for conscious analysis and understanding varies between musicians, is this the same for the experienced connection between input and output? And what about the difference between playing and listening as part of the process, or just listening, either as a musician, a musicologist, or an audience member? For Tone and Carl Haakon it seemed like a shared experience that playing with regularity/non-regularity felt intuitive for both – while this was actually hard for Øyvind to believe, because he knew the current weakness in how he had implemented the analysis. Parts of the rhythmic analysis methods implemented are very noisy, meaning they produce results that sometimes can have significant (even huge) errors in relation to a human interpretation of the rhythms being analysed. The fact that the musicians still experienced the analyses as responding intuitively is interesting, and it could be connected to something Mats said later on: “the musicians listen in another way, because they have a direct contact with what is really happening”. So, perhaps, while Tone & CH experienced that some things really made musical sense, Øyvind focused on what didn’t work – which would be easier for him to hear? So how do we understand this, and how is the analysis connected to the sounding result? Andreas pointed out that there is a difference between hearing and analysing: you can learn how the sound behaves and work with that. It might still be difficult to predict exactly what will happen. Tone’s comment here was that you can relate to a certain unpredictability and still have a sort of control over some larger “groups of possible sound results” that you can relate to as a musician. There is not only an urge to “make sense” (= to understand and “know” the connection) but also an urge to make “aesthetic sense”.

With regard to the experienced complexity, Øyvind also commented that the analysis of a real musical signal is in many ways a gross simplification, and by trying to make sense of the simplification we might actually experience it as more complex. The natural multidimensionality of the experienced sound is lost, due to the singular focus on one extracted feature. We are reinterpreting the sound as something simpler. An example mentioned was vibrato, which is a complex input and a complex analysis that could in some analyses be reduced to a simple “more or less” dimension. This issue also relates to the need of our project to construct new methods of analysis, so that we can try to find analysis dimensions that correspond to perceptual or experiential features.

Andreas commented: “It is quite difficult to really know what is going on without having knowledge of the system and the processes. Even simple mappings can be difficult to grasp only by ear”. Trond reminded us after the meeting about a further complexity that was perhaps not so present in our discussion: we do not improvise with only one parameter “out of control” (adaptive processing). In the cross-adaptive situation someone else is processing our own instrument, so we do not have full control over this output, and at the same time we do not have any control over what we are processing, the input (cross-adapting), which in both cases could represent an alienation and perhaps a disconnection from the input-result relation. And of course the experience of control is also connected to “understanding” the analysis and processing you are working with.

The process of interplay:

Øyvind referred to Tone’s experience of a musical “need” during the Trondheim session, expressed as “I hope he understands…”, when she talked about the processes in the interplay. This points at how you realise during the interplay that you have very clear musical expectations and wishes towards the other performer. This is not in principle different from a lot of other musical improvising situations. Still, because you are dependent on the other’s response in a way that defines not only the whole, but your own part in it, this thought seemed to be more present than is usual in this type of interplay.

Tools and setup

Mats commented that very many of the effects that are used are about room size, and that he felt this had some – to him – unwanted aesthetic consequences. Øyvind responded that he wanted to start with effects that are easy to control and where the control is easy to hear. Delay feedback and reverb size are such effects. Mats also suggested that it is an important aesthetic choice not to have effects all the time, and thereby retain the possibility to hear the instrument itself. So to what extent should you be able to choose? We discussed the practical possibilities here: some of the musicians (for example Bjørnar Habbestad) have suggested a foot pedal, where the musician could control the degree to which their actions will inflict changes on the other musician’s sound (or the other way around, control the degree to which other forces can affect their own sound). Trond suggested one could also have control over the output level of the effects, adjusting the balance between processed and unprocessed sound. As Øyvind commented, these types of control could be a pedagogical tool for rehearsing with the effect, turning the processing on and off to understand the mapping better. The tools are of course partly defining the musician’s balance between control, predictability and alienation. Connected to this, we had a short discussion regarding amplified sound in general: the instrumental sound coming from a speaker located elsewhere in the room could in itself already represent an alienation. Simon referred to the Lawrence Casserley/Evan Parker principle of “each performer’s own processor”, and the situation before the age of the big PA, where the electronic sound could be localised to each musician’s individual output. We discussed possibilities and difficulties with this in a cross-adaptive setting: which signal should come out of your speaker? The processed sound of the other, or the result of the other processing you? Or both? And then what would the function be – the placement of the sound is already disturbed.

Rhythm

New in this session was the use of the rhythmical analysis. This is very different from all other parameters we have implemented so far. Other analyses relate to the immediate sonic character, but rhythmic analysis tries to extract some  temporal features, patterns and behaviours. Since much of the music played in this project is not based on a steady pulse, and even less confined to a regular grid (meter), the traditional methods of rhythmic analysis will not be appropriate. Traditionally one will find the basic pulse, then deduce some form of meter based on the activity, and after this is done one can relate further activity to this pulse and meter. In our rhythmical analysis methods we have tried to avoid the need to first determine pulse and meter, but rather looked into the immediate time relationships between neighbouring events. This creates much less support for any hypothesis the analyser might have about the rhythmical activity, but also allows much greater freedom of variation (stylistically, musically) in the input. Øyvind is really not satisfied with the current status of the rhythmic analysis (even if he is the one mainly responsible for the design), but he was eager to hear how it worked when used by Tone and Carl Haakon.  It seems that the live use by real musicians allowed the weaknesses of the analyses to be somewhat covered up. The musicians reported that they felt the system responded quite well (and predictably) to their playing. This indicates that, even if refinements are much needed, the current approach is probably a useful one. One thing that we can say for sure is that some sort of rhythmical analysis is an interesting area of further exploration, and that it can encode some perceptual and experiential features of the musical signal in ways that make sense to the performers. And if it makes sense to the performers, we might guess that it will have the possibility of making sense to the listener as well.
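As a rough illustration of the “neighbouring events” approach (not the actual analyzer implementation): a regularity measure can be derived from the ratios of consecutive inter-onset intervals, without any pulse or meter hypothesis.

```python
# Rough illustration of pulse-free rhythm analysis: compare each inter-onset
# interval (IOI) with its neighbour. Ratios near 1 indicate regular playing,
# large deviations indicate irregularity. Not the project's actual analyzer.
def regularity(onset_times, window=4):
    iois = [t2 - t1 for t1, t2 in zip(onset_times, onset_times[1:])]
    if len(iois) < 2:
        return 1.0
    recent = iois[-(window + 1):]
    ratios = []
    for a, b in zip(recent, recent[1:]):
        longer, shorter = max(a, b), min(a, b)
        ratios.append(shorter / longer if longer > 0 else 1.0)
    return sum(ratios) / len(ratios)   # 1.0 = fully regular, -> 0 = irregular

print(regularity([0.0, 0.5, 1.0, 1.5, 2.0]))   # steady pulse -> 1.0
print(regularity([0.0, 0.1, 0.7, 0.75, 1.9]))  # irregular -> much lower
```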

Andreas: How do you define regularity (e.g. in clave-based musics)? How much “less regular” is that than a steady beat?

Simon: If you ask a difficult question with a range of possible answers this will be difficult to implement within the project.

As a follow up to the refinement of rhythmic analysis, Øyvind asked:  how would *you* analyze rhythm?

Simon: I wouldn’t analyze rhythm. Take, for example, the timeline in African music: a guiding pulse that is not necessarily performed and may exist only in the performer’s head. (This relates directly to Andreas’s next point – Simon later withdrew the idea that he would not analyse rhythm and acknowledged its usefulness in performance practice.)

Andreas: Rhythm is a very complex phenomenon, which involves multiple interconnected temporal levels, often hierarchically organised. Perceptually, we have many ongoing processes involving present, past and anticipations about future events. It might be difficult to emulate such processes in software analysis. Perhaps pattern recognition algorithms can be good for analysing rhythmical features?

Mats: What is rhythm? In our examples, gesture may be a more useful concept than rhythm.

Øyvind: Rhythm is repeatability, perhaps? Maybe we interpret this in the second after it happens.

Simon: No, I think we interpret it virtually at the same time.

Tone: I think of it as a bodily experience first and foremost. (Thinking about this in retrospect, Tone adds: The impulses – when they are experienced as related to each other – create a movement in the body. I register that through the years (working with non-metric rhythms in free improvisation) there is less need for a periodical set of impulses to start this movement. (But when I look at babies and toddlers, I recognise this bodily reaction to irregular impulses.) I recognise what Andreas says – the movement is triggered by the expectation of more to come. (Think about the difference in your body when you wait for the next impulse to come (anticipation) and when you know that it is over….)

Andreas: When you listen to rhythms, you group events on different levels and relate them to what was already played. The grouping is often referred to as «chunking» (psychology). Thus, it works both on an immediate level (now) and on a more overarching level (bar, subsection, section), because we have to relate what we hear to what came earlier. You can simplify or make it complex.

Concerts and presentations, fall 2016

A number of concerts, presentations and workshops were given during October and November 2016. We could call it the 2016 Crossadaptive Transatlantic tour, but we won’t. This post gives a brief overview.

Concerts in Trondheim and Göteborg

BRAK/RUG was scheduled for a concert (with a preceding lecture/presentation) at Rockheim, Trondheim on 21. October. Unfortunately, our drummer Siv became ill and could not play. At 5 in the afternoon (concert start at 7) we called Ola Djupvik to ask if he could sit in with us. Ola has experience from playing in a musical setting with live processing and crossadaptive processing, for example the session on 20.–21. September, and also from performing with music technology students Ada Mathea Hoel, Øystein Marker and others. We were very happy and grateful for his courage to step in on such short notice. Here’s an excerpt from the presentation that night, showing vocal pitch controlling reverb on the drums (high pitch means smaller reverb size), and transient density on the drums controlling delay feedback on the vocals (faster playing means less feedback).

There is a significant amount of crossbleed between vocals and drums, so the crossadaptivity is quite flaky. We still have some work to do on source separation to make this work well when playing live with a PA system.

Crossadaptive demo at Rockheim mix_maja_ola_cross_rockheim_ptmstr

 

Thanks to Tor Breivik for recording the Rockheim event. The clip here shows only the crossadaptive demonstration. The full concert is available on Soundcloud

brak_trio_rockheim1
Brandtsegg, Ratkje, Djupvik trio at Rockheim

The day after the Trondheim concert, we played at the Göteborg Art Sounds festival. Now, Siv was feeling better and was able to play. Very nice venue at Stora Teatern. This show was not recorded.

And then we take… the US

The crossadaptive project was presented at the Transatlantic Forum in Chicago on October 24, in a special session titled “Sensational Design: Space, Media, and the Senses”. Sigurd Saue, Trond Engum and myself (Øyvind Brandtsegg) all took part in the presentation, showing the many-faceted aspects of our work. Being a team of three people also helped the networking effort that is naturally a part of such a forum. During our stay in Chicago, we also visited the School of the Art Institute of Chicago, meeting Nicolas Collins, Shawn Decker, Lou Mallozzi, and Bob Snyder to start working on exchange programs for both students and faculty. Later in the week, Brandtsegg did a presentation of the crossadaptive project during a SAIC class on audio projects.

sigurd_and_bob
Sigurd Saue and Bob Snyder at SAIC

After Chicago, Engum and Saue went back to Trondheim, while I traveled further on to San Francisco, Los Angeles, Santa Barbara, and then finally to San Diego.
In the Bay Area, after jamming with Joel Davel in Paul Dresher’s studio, and playing a concert with Matt Ingalls and Ken Ueno at Tom’s Place, I presented the crossadaptive project at CCRMA, Stanford University on November 2. The presentation seemed well received and spurred a long discussion where we touched on the use of MFCCs, ratios and critical bands, stabilizing the peaks of rhythmic autocorrelation, the difference of the correlation between two inputs (to get to the details of each signal), and more. Getting the opportunity to discuss audio analysis with this crowd was a treat. I also got the opportunity to go back the day after to look at student projects, which I find gives a nice feel for the vibe of the institution. There is a video of the presentation here

After Stanford, I also did a presentation at the beautiful CNMAT at UC Berkeley, with Ed Campion, Rama Gottfried, and a group of enthusiastic students. There I also met colleague P.A. Nilsson from Göteborg, as he was on a residency there. P.A.’s current focus on technology to intervene in and structure improvisations is closely related to some of the implications of our project.

cnmat
CNMAT, UC Berkeley

On November 7 and 8, I did workshops at California Institute of the Arts, invited by Amy Knoles. In addition to presenting the technologies involved, we did practical studies where the students played in processed settings and experienced the musical potential and also the different considerations involved in this kind of performance.

calarts_workshop
Calarts workshops

Clint Dodson and Øyvind Brandtsegg experimenting together at CalArts

At UC Santa Barbara, I did a presentation in Studio Xenakis on November 9. There, I met with Curtis Roads, Andres Cabrera, and a broad range of their colleagues and students. With regard to listening to crossadaptive performances, Curtis Roads made the precise observation that it is relatively easy to follow if one knows the mappings, but it could be hard to decode the mapping just by listening to the results. In Santa Barbara I also got to meet Owen Campbell, who did a master’s thesis on crossadaptive processing, and I got insight into his research and software solutions. His work on ADEPT was also presented at the AES workshop on intelligent music production at Queen Mary University this September, where Owen also met our student Iver Jordal, who presented his research on artificial intelligence in crossadaptive processing.

San Diego

Back in San Diego, I did a combined presentation and concert for the computer music forum on November 17.  I had the pleasure of playing together with Kyle Motl on double bass for this performance.

sd_kyle_and_oyvind
Kyle Motl and Øyvind Brandtsegg, UC San Diego

We demonstrated both live processing and crossadaptive processing between voice and bass. There was a rich discussion with the audience. We touched on issues of learning (parameter by parameter, or learning a combined and complex parameter set as one would on an acoustic instrument), etudes, inverted mappings sometimes being more musically intuitive, how this can make musicians pay more attention to each other than to themselves (frustrating or liberating?), and tuning of the range and shape of parameter mappings (which still seems to be a bit on/off sometimes, with relatively low resolution in the middle range).

First we did an example of a simple mapping:
Vocal amplitude reduces reverb size for the bass.
Bass amplitude reduces delay feedback on the vocals.
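In code terms, this crosswise routing amounts to two inverted mappings, each fed by the other player’s analysis stream. A small illustrative sketch, where the effect objects and ranges are stand-ins rather than the actual setup:

```python
# Sketch of the crosswise routing in this simple example: each performer's
# amplitude (normalized 0..1 by the analyzer) reduces an effect parameter
# on the other performer's signal. The effect classes are stand-ins.
class Reverb:
    size = 0.5          # stand-in for the reverb on the bass

class Delay:
    feedback = 0.3      # stand-in for the delay on the vocals

def inverse_map(value, out_min, out_max):
    value = min(max(value, 0.0), 1.0)
    return out_max - value * (out_max - out_min)

def update_effects(vocal_amp, bass_amp, bass_reverb, vocal_delay):
    # louder vocals -> smaller reverb size on the bass
    bass_reverb.size = inverse_map(vocal_amp, 0.1, 0.9)
    # louder bass -> less delay feedback on the vocals
    vocal_delay.feedback = inverse_map(bass_amp, 0.0, 0.7)

update_effects(0.8, 0.2, Reverb(), Delay())
```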

Kyle and Oeyvind Simple mix_sd_nov_17_3_cross_ptmstr

 

Then a more complex example:
Vocal transient density -> Bass lowpass filter cutoff frequency
Vocal pitch -> Bass delay filter frequency
Vocal percussive -> Bass delay feedback
Bass transient density -> Vocal reverb size (less)
Bass pitch+centroid -> Vocal tremolo speed
Bass noisiness -> Vocal tremolo grain size (less)

K&O complex mapping mix_sd_nov_17_4_cross_ptmstr

 

We also demonstrated another and more direct kind of crossadaptive processing, doing convolution with a live sampled impulse response. Oeyvind manually controlled the live IR sampling of sections of Kyle’s playing, and also triggered the convolver by tapping and scratching on a small wooden box with a piezo microphone. The wooden box source is not heard directly in the recording, but the resulting convolution is. No other processing is done, just the convolution process.

K&O, convolution mix_sd_nov_17_2b_conv_ptmstr

 

We also played a longer track of regular live processing this evening. This track is available on Soundcloud

Thanks to UCSD and recording engineers Kevin Dibella and James Forest Reid for recording the Nov 17 event.