Seminar: Mixing and timbral character

Online conversation with Gary Bromham (London), Bernt Isak Wærstad (Oslo), Øyvind Brandtsegg (San Diego), Trond Engum and Andreas Bergsland (Trondheim). Gyrid N. Kaldestad, Oslo, was also invited but unable to participate.

The meeting revolves around the issue of “mixing and timbral character” as related to the crossadaptive project. As many aspects of the project touch upon these issues, we have kept the agenda quite open for now, asking each participant to bring one problem/question/issue.

Mixing, masking

In Oslo they worked with the analysis parameters spectral crest and flux, aiming to use these to create a spectral “ducking” effect, where the actions of one instrument could selectively affect separate frequency bands of the other instrument. Gary is also interested in these kinds of techniques for mixing, to work with masking (allowing and/or avoiding masking). One could think of it as multiband sidechaining with dynamic bands, like a de-esser, but adaptive to whichever frequency band currently needs modification. These techniques are related to previous work on adaptive mixing (for example at QMUL), and are also partially addressed by recent commercial plugins, like iZotope Neutron.
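To make the two features concrete, here is a minimal numpy sketch using their standard definitions (crest as peak-to-mean ratio of the magnitude spectrum, flux as the sum of positive bin-wise changes between frames); the exact formulations in the project's analyzer may differ:

```python
import numpy as np

def spectral_crest(mag):
    # peak-to-mean ratio of the magnitude spectrum:
    # high for tonal (peaky) spectra, closer to 1 for flat/noisy ones
    return np.max(mag) / np.mean(mag)

def spectral_flux(prev_mag, mag):
    # sum of positive bin-wise changes between consecutive frames,
    # i.e. how much new spectral energy appeared since the last frame
    return np.sum(np.maximum(mag - prev_mag, 0.0))

sr, n = 44100, 1024
t = np.arange(n) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
noise = np.random.default_rng(1).uniform(-1.0, 1.0, n)
win = np.hanning(n)

tone_mag = np.abs(np.fft.rfft(tone * win))
noise_mag = np.abs(np.fft.rfft(noise * win))

# a sine tone concentrates its energy in a few bins, so its crest
# is much higher than that of broadband noise
print(spectral_crest(tone_mag) > spectral_crest(noise_mag))  # True
```

A spectral ducking effect would compute such features per frequency band and use them to drive band-wise gain reduction on the other instrument.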
However interesting these techniques are, the main focus of our current project is on the performative application of adaptive and crossadaptive effects. That said, it could be fruitful to use these techniques not to solve old problems, but to find new working methods in the studio as well. Within the scope of the project, this kind of creative studio work can be aimed at familiarizing ourselves with the crossadaptive methods in a controlled and repeatable setting. Bernt also brought up the issue of recording the analysis signals, using them perhaps as source material for creative automation, editing the recorded automation as one might see fit. This could also be an effective way of becoming familiar with the analyzer output, as it invites taking a closer look at the details of the different analysis methods. Recording the automation data is straightforward in any DAW, since the analyzer output comes into the DAW as external MIDI or OSC data. The project does not need to develop any custom tools to allow recording and editing of these signals, but it might be a very useful path of exploration in terms of working methods. I’d say yes please, go for it.
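Even outside a DAW, such recording amounts to little more than timestamping each incoming feature value. A minimal stdlib sketch (the handler would in practice be registered as an OSC or MIDI callback, and the addresses shown are made up for illustration):

```python
import csv
import io

class AutomationRecorder:
    """Timestamp incoming analyzer values so they can later be edited like DAW automation."""

    def __init__(self):
        self.events = []  # rows of (time, address, value)

    def handle(self, t, address, value):
        # in a real setup this would be the callback for incoming OSC messages
        self.events.append((t, address, value))

    def to_csv(self):
        # export for offline editing and re-import as an automation lane
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["time", "address", "value"])
        writer.writerows(self.events)
        return buf.getvalue()

rec = AutomationRecorder()
rec.handle(0.00, "/analyzer/crest", 0.72)  # hypothetical OSC address
rec.handle(0.01, "/analyzer/flux", 0.15)
print(len(rec.events))  # 2
```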

Working with composed material, post production

Trond had recently done a crossadaptive session with classical musicians playing composed material. It seems that this, even though done “live”, has much in common with applying crossadaptive techniques in post production or mixing, because the interactive element is much less apparent. The composition is a set piece, so any changes to the instrumental timbre will not change what is played, but can rather influence the nuances of interpretation. Thus, it is much more a one-way process than a dialectic between material and performance. Experts on the interpretation of composed music will perhaps cringe at this description, saying there is indeed a dialogue between interpretation and composition. While this is true, the degree to which the performed events can be changed is smaller within a set composition. In recent sessions, Trond felt that the adaptive effects existed in a parallel world, outside of the composition’s aesthetic, something unrelated added on top. The same can be said about using adaptive and crossadaptive techniques in the mixing stage of a production, where all tracks are previously recorded and thus can be regarded as a set (non-changeable) source. With regard to applying analysis and modulation to recorded material, one could also mention that the Oslo sessions used recordings of the instruments in the session to explore the analysis dimensions. This was done as an initial exploratory phase of the session, with the aim of finding features that already exist in the performer’s output, rather than imposing new dimensions of expression that the performer would need to adapt to.

On repeatability and pushing the system

The analysis-modulator response to an acoustic input is not always explicitly controllable. This is due to the nature of some of the analysis methods: technical weaknesses introduce “flicker” or noise in the analyzer output. Even though these deviations are not inherently random, they are complex and sometimes chaotic. In spite of these technical weaknesses, we notice that our performers often thrive. Musicians will often “go with the flow” and create on the spot, the interplay being energized by small surprises and tensions, both in the material and in the interactions. This sometimes allows the use of analysis dimensions/methods that have spurious noise/flicker, still resulting in a consistent and coherent musical output, due to the performer’s experience in responding to a rich environment of sometimes contradictory signals. This touches on one of the core aspects of our project: intervention into the traditional modes of interplay and musical communication. It also touches upon the transparency of the technology: how much should the performer be aware of the details of the signal chain? Sometimes rationalization makes us play safe. A fruitful scenario would be aiming for analysis-modulator mappings that create tension, something that intentionally disturbs and refreshes. The current status of our research leaves us with a seemingly unlimited number of combinations and mappings, a rich field of possibilities yet to be charted. The options are still so many that any attempt at conclusions about how it works or how to use it seems futile. Exploration in many directions is needed. This is not aimless exploration, but rather searching without knowing what can be found.
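When the flicker is unwanted rather than energizing, a common generic remedy is a one-pole lowpass on the control signal before it reaches the modulation destination (a sketch of the general technique, not the project's actual filtering; the coefficient value here is arbitrary):

```python
class OnePoleSmoother:
    # y[n] = y[n-1] + a * (x[n] - y[n-1]); smaller a gives heavier smoothing
    def __init__(self, a, y=0.0):
        self.a = a
        self.y = y

    def __call__(self, x):
        self.y += self.a * (x - self.y)
        return self.y

smooth = OnePoleSmoother(a=0.2)
noisy = [0.5, 0.9, 0.1, 0.8, 0.2, 0.7]  # flickering analyzer output
smoothed = [smooth(x) for x in noisy]
# the control signal still moves, but with much smaller jumps per frame
```

The trade-off is latency: heavier smoothing means the modulation lags further behind the performed gesture, which is itself a musically relevant choice.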

Listening, observing

Andreas mentions it is hard to pinpoint single issues in this rich field. As an observer, it can be hard to decode what is happening in the live setting. During sessions, it is sometimes a complex task to follow the exact details of the analysis and modulation. Then, when listening to the recorded tracks again later, it is easier to appreciate the musicality of the output. Perhaps not all details of the signal chain are cleanly defined and stringent in all aspects, but the resulting human interaction creates a lively musical output. As with other kinds of music making, it is easy to get caught up in details at the time of creation. Trying to listen in a more holistic manner, taking in the combined result, is a skill not to be forgotten in our explorations either.

Adaptive vs cross-adaptive

One way of working towards a better understanding of the signal interactions involved in our analyzer-modulator system is to do adaptive modulation rather than cross-adaptive. This brings a much more immediate mode of control to the performer, exploring how the extracted features can be utilized to change his or her own sound. It seems several of us have been eager to explore these techniques, but have put it off since it did not align with the primary stated goals of crossadaptivity and interaction. Now, looking at the complexity of the full crossadaptive situation, it is fair to say that exploration of adaptive techniques can serve as a very valid way of getting in touch with the musical potential of feature-based modulation of any signal. In its own right, it can also be a powerful method of sonic control for a single performer, as an alternative to a large array of physical controllers (pedals, faders, switches). As mentioned earlier in this session, working with composed material or set mixes can be a challenge for the crossadaptive methods; exploring adaptive techniques might be more fruitful in those settings. Working with adaptive effects also brings attention to other possibilities of control for a single musician over his or her own sound. Some of the recent explorations of convolution with Jordan Morton show the use of voice-controlled crossadaptivity as applied to a musician’s own sound. In this case, the dual instrument of voice and bass operated by a single performer allows similar interactions between instruments, but bypasses the interaction between different people, thus simplifying the equation somewhat. This also brings our attention to using the voice as a modulator of effects for instrumentalists who do not use the voice as part of their primary musical output. Although this has been explored by several others (e.g. Jordi Janner, Stefano Fasciani, and also the recent Madrona Labs “Virta” synth), it is a valid and interesting aspect, integral to our project.
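At its simplest, an adaptive mapping is a scaling from a feature range to a parameter range. A minimal sketch, with made-up ranges, of a performer's own spectral crest controlling a filter cutoff on their own sound:

```python
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    # clip the feature value to its expected range,
    # then scale it linearly into the parameter range
    x = min(max(x, in_lo), in_hi)
    pos = (x - in_lo) / (in_hi - in_lo)
    return out_lo + pos * (out_hi - out_lo)

# e.g. spectral crest in an assumed range 1..20 controlling
# a hypothetical filter cutoff between 200 and 8000 Hz
cutoff = map_range(10.5, 1.0, 20.0, 200.0, 8000.0)
print(round(cutoff))  # 4100
```

In practice one might add nonlinear (e.g. exponential) scaling for frequency-like parameters, but the clip-and-scale step is the core of any such mapping.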

