Mixing with Gary

During our week in London we had some sessions with Gary Bromham, first at the Academy of Contemporary Music in Guildford on June 7th, then at QMUL later in the week. We wanted to experiment with cross-adaptive techniques in a traditional mixing session, using our tools/plugins within the mix session to work similarly to traditional sidechaining, but with the expanded palette of analysis and modulator mappings enabled by the tools developed in the project. Initially we tried to set this up with Logic as the DAW. It kind of works, but proved utterly unreliable: Logic would not respond to learned controller mappings after we closed the session and reopened it. It does receive the MIDI controller signal (and can re-learn), but in all cases refused to respond to the received automation control. In the end we abandoned Logic altogether and went with our safe, always-does-the-job Reaper.

As the test session for our experiments we used Sheryl Crow's “Soak Up”, with stems for the backing tracks and the vocals.


Example 1: Vocal pitch to reverb send and reverb decay time.


Example 2: Vocal pitch as above. Adding vocal flux to hi-cut frequency for the rest of the band. Rhythmic analysis (transient density) of the backing track controls a peaking EQ sweep on the vocals, creating a sweeping effect somewhat like a phaser. All together this is somewhat odd, but useful as a controlled experiment in polyphonic cross-adaptive modulation.

* The first thing Gary asked for was to process one track according to the energy in a specific frequency band of another. For example “if I remove 150Hz on the bass drum, I want it to be added to the bass guitar”. Now, it is not so easy to analyze what is missing, but easier to analyze what is there. So we thought of another thing to try: sibilants (e.g. S'es) on the vocals can be problematic when sent to reverb or delay effects. Since we don't have a multiband envelope follower (yet), we tried to analyze for spectral flux or crest, then use that control signal to duck the reverb send for the vocals (see the sketch after this list).

* We had some latency problems relating to pitch tracking of the vocals: the modulator signal arrived a bit late to precisely control the reverb size for the vocals. The tracking is OK, but the effect responds *after* the high pitch. This was solved by delaying the vocal *after* the point where it is sent to the analyzer, then also delaying the direct vocal signal and the rest of the mix accordingly.

* Trond's idea for later: use vocal amplitude to control bitcrush mix on drums (and other programmed tracks)

* Trond's idea for later: use vocal transient density to control delay settings (delay time, … or delay mix)

* Bouncing the mix: bouncing does not work, as we need the external modulation processing (analyzer and MIDIator) to also be active. Logic seems to disable the “external effects” (here Reaper running via Jack, like an outboard effect in a traditional setting) when bouncing.

* Something good: Pitch controlled reverb send works quite well musically, and is something one would not be able to do without the crossadaptive modulation techniques. Well, it is actually just adaptive here (vocals controlling vocals, not vocals controlling something else).

* Notable: do not try to fix (old) problems, but try to be creative and find new applications/routings/mappings. For example, the initial ideas from Gary were related to common problems in a mixing situation, problems that one can already fix (with de-essers or similar).

* Trond: It is unfamiliar in a post production setting to hear the room size change, as one is used to static effects in the mix.

* It would be convenient if we could modulate the filtering of a control signal depending on analyzed features too. For example changing the rise time for pitch depending on amplitude.

* It would also be convenient to have the filter times as sync'ed values (e.g. 16th notes) relative to the master tempo.
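Referring back to the sibilance idea in the first bullet of this list, here is a minimal offline sketch in Python/NumPy of ducking the vocal reverb send from spectral flux. The frame size, smoothing and ducking depth are assumed values for illustration, not the session settings, and the actual setup runs this mapping in real time through the analyzer and MIDIator plugins.

```python
import numpy as np

def spectral_flux(frames):
    """Per-frame spectral flux: sum of positive change in the magnitude spectrum."""
    mags = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    diff = np.diff(mags, axis=0, prepend=mags[:1])
    return np.maximum(diff, 0.0).sum(axis=1)

def duck_send(vocal, framesize=1024, depth=0.8, smooth=0.9):
    """Return one reverb-send gain per frame, ducked when flux (sibilance) is high."""
    nframes = len(vocal) // framesize
    frames = vocal[:nframes * framesize].reshape(nframes, framesize)
    flux = spectral_flux(frames)
    flux /= (flux.max() + 1e-12)              # normalize to 0..1
    gains = np.empty(nframes)
    env = 0.0
    for i, f in enumerate(flux):
        env = smooth * env + (1.0 - smooth) * f   # one-pole smoothing
        gains[i] = 1.0 - depth * env              # high flux -> lower send level
    return gains
```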

FIX:

– Add multiband RMS analysis (see the sketch after this list).

– Check the roundtrip latency of the analyzer-modulator, i.e. the time it takes from when an audio signal is sent until the modulator signal comes back.

– Add modulation targets (e.g. rise time). This most probably just works, but we need to open the MIDI feed back into Reaper.

– Add sync to the filter times. Cabbage reads bpm from the host, so this should also be relatively straightforward (the conversion is shown in the sketch below).
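Two of the FIX items lend themselves to a quick sketch. The band edges, filter order and function names below are assumptions for illustration, not the analyzer's actual implementation; the tempo-sync helper only shows the bpm-to-seconds arithmetic that the host bpm would feed.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def multiband_rms(x, sr=44100, edges=(200, 1000, 4000), framesize=512):
    """RMS envelope per band; the band edges (Hz) are placeholder values."""
    bands = []
    lo = 20.0
    for hi in list(edges) + [sr * 0.45]:
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        bands.append(sosfilt(sos, x))
        lo = hi
    nframes = len(x) // framesize
    out = np.empty((len(bands), nframes))
    for b, y in enumerate(bands):
        frames = y[:nframes * framesize].reshape(nframes, framesize)
        out[b] = np.sqrt((frames ** 2).mean(axis=1))
    return out  # shape: (n_bands, n_frames)

def synced_time(bpm, division=16):
    """Duration in seconds of one 1/division note at the host tempo."""
    return (60.0 / bpm) * (4.0 / division)

# e.g. a 16th note at 120 bpm -> 0.125 s rise/fall time
```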

Mixing example, simplified interaction demo

When working further with some of the examples produced in an earlier session, I wanted to see if I could demonstrate one instrument's influence on the other instrument's sound more clearly. Here I've made an example where the guitar controls the effects processing of the vocal. For simplicity, I've looped a small segment of the vocal take to create a track that is relatively static, so the changes in the effect processing should be easy to spot. For the same reason, the vocal does not control anything on the guitar in this example.

The Reaper session for the following examples can be found here.


Example track: The guitar track is split into two control signals, one EQ'ed to contain only low frequencies, the other only high frequencies. The control signals are then gated and used as sidechain control signals for two different effects tracks processing the vocal signal. The vocal signal is just a short loop of quite static content, to make it easier to identify the changes in the effects processing.


Example track: As above, but here the original vocal track is used as input to the effects, giving a more dynamic and flexible musical expression.
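The routing described in the example tracks above could be approximated offline roughly like this. The crossover frequency, envelope times and gate threshold are placeholders, not the session settings, and the gate outputs would in practice feed the send levels of the two vocal effects.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def envelope(x, sr, attack=0.005, release=0.1):
    """Simple peak follower with separate attack/release smoothing."""
    a = np.exp(-1.0 / (attack * sr))
    r = np.exp(-1.0 / (release * sr))
    env = np.zeros(len(x))
    e = 0.0
    for i, v in enumerate(np.abs(x)):
        coef = a if v > e else r
        e = coef * e + (1.0 - coef) * v
        env[i] = e
    return env

def gated_controls(guitar, sr=44100, split=800.0, threshold=0.05):
    """Two gate control signals from the low and high band of the guitar."""
    lo_sos = butter(4, split, btype="lowpass", fs=sr, output="sos")
    hi_sos = butter(4, split, btype="highpass", fs=sr, output="sos")
    lows = envelope(sosfilt(lo_sos, guitar), sr)
    highs = envelope(sosfilt(hi_sos, guitar), sr)
    return (lows > threshold).astype(float), (highs > threshold).astype(float)

# Each control signal opens/closes the send into its own vocal effects track.
```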

Introductory session NTNU, Trond/Øyvind

Date: 3 May 2016

Location: NTNU Mustek

Participants: Trond Engum, Øyvind Brandtsegg

Session objective and focus:

Test ourselves as musicians in a cross-adaptive setting, meaning: test how we react to being in the role of the one being processed.

Test out different mappings, different effects. Try the creative/multiband sidechain gating, and other means of crossadaptivity within an off-the-shelf regime.

Reflect on this situation.

Takes:


Take 1: first test. Two-way control.


Take 2: Voc controls Guitar (only). Pitch to reverb hi-freq decay (high pitch means open HF reverb tail), amp to reverb time (amp ducking reverb time, high amp is short reverb).


Take 3: Guitar controls Voc. Pitch to delay feedback (high pitch is long feedback). Transient density to delay time (low density means long delay time).


Take 4: Two-way control. This is more music! Same mapping as for takes 2 and 3. Minor adjustments to noise floor etc. during the take.


Take 5: Multiband sidechain gating. Vocal low freq opens gate for pitch-down effect. Vocal high freq opens gate for pitch-up effect on guitar. Several takes …


Take 6: As take 5, but with roles switched. Guitar controlling gate for vocals.


Take 7: (Several) attempts at refining the setup from take 6 for cleaner gate triggering. Take 7c seems reasonably good. Using electric guitar allows cleaner frequency separation, and we use an extra sidechain from the low-band trigger track to duck the high-band trigger track (avoiding strong transients in the bass register opening the hi-freq gate). Again, one thing that is hard as a performer: when one wishes for a specific sound and the modulating musician plays something to the contrary, there is a strong tension (for good and bad).

Comments:

* There are two wildly competing modes of attention:

  • control an effect parameter with one’s audio signal
  • respond musically to the situation

… This is a major issue !!

* Attention grabber (also difficult): to remember what I control on the other sound and what the other performer controls on my sound. Intellectual focus. It is also difficult (as of yet) to hear and listen to the sound and understand musically how the other one affects my sound. Somewhat easier to understand how I affect the sound of the other.

* Introductory exercises: one-way adaptive control. One being processed, the other one controlling.

* When I merely control the parameters of the other, I might feel a bit left out of the situation, not participating musically. My playing influences the collective sound, but what I actually play does not make sense.

* When controlling, and having a firm and good monitoring of the processed signal, the situation is more open for participation and emotional engagement.

* We should test playing with headphones for even more controlled monitoring, and more presence to the processed signal.

* Using traditional effects (reverb time, delay time etc) forces the musical expression into traditional modes. Maybe trying more crazy effects will open up for more expressive use of the modulations. Simple mappings provide more intentional control, but perhaps complex mappings can provide a frebag-energy-influenced expression.

* Take 4. Two-way control now approaches more musical interplay. Easier to wait, give room, listen to the (long) effect tail. Wait. Listen. Intentional control is possible, but also interspersed with a chaotic “let it flow” approach. Changing between control and non-control is musically effective. When going out of the traditional tonal type of playing we attain more (effective) timbral expressive control.

* The relationship between feature and control signal can effectively be reversed (reversed polarity). Changing the polarity of modulation distinctly changes the mode of musical interplay, e.g. low transient density means long delay time, or low density means short delay time (see the small mapping sketch after this list).

* Multiband sidechain gating works well in a traditional musical application. It also seems reasonably easy for the performer to control, but needs considerable signal preprocessing to isolate energy in the desired frequency band.

* Multiband separation (for clean gating) is difficult, because transients generally have energy in all frequency bands. Ideally we would like to separate high notes from low notes, but high-note transients have considerable low-frequency energy in many instruments, and low notes also have considerable energy in the higher partials. We experimented with broadband EQs with medium Q, and also very narrow Qs centered on specific notes (e.g. trying to separate out a low E on a guitar). Acoustic guitar with a contact mike was particularly difficult; trying now with electric guitar seems a bit easier.

* The effects applied in this session are generally quite simple, and would not initiate musical incentives as such if they were static. With cross-adaptive control they gain a higher degree of plasticity, and the energy flow in the interplay creates more interesting behavior … in our current opinion.
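As a small illustration of the polarity comment above, a generic feature-to-parameter mapping with a polarity switch might look like this; the input and output ranges are hypothetical values, not MIDIator settings.

```python
def map_feature(value, lo, hi, out_min, out_max, invert=False):
    """Scale an analysis value from [lo, hi] into a parameter range,
    optionally reversing the polarity of the mapping."""
    norm = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    if invert:
        norm = 1.0 - norm
    return out_min + norm * (out_max - out_min)

# Example: transient density in events per second (hypothetical range 0.5..8)
density = 2.0
# low density -> long delay time:
delay_time = map_feature(density, 0.5, 8.0, 0.05, 1.5, invert=True)
# reversed polarity: low density -> short delay time:
delay_time_flipped = map_feature(density, 0.5, 8.0, 0.05, 1.5, invert=False)
```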

Introductory session, NTNU, Bernt/Øyvind

Date: 26 April 2016

Location: NTNU Mustek

Participants: Bernt Isak Wærstad, Øyvind Brandtsegg

Session objective and focus:

  1. Test ourselves as musicians in a cross-adaptive setting. We have usually been the processing musicians; now we should test ourselves as the victims of automated processing.
  2. Make a simple test session for Live / M4L, for Bernt’s setup

Takes:

  • 4 introductory and testing takes.
    Effects and mapping:
    Vocal: Reverb. Guitar pitch controls reverb decay time. High pitch = long reverb.
    Guitar: Delay. Vocal transient density controls delay feedback. Low density = max feedback.

    Take 5: Increase sensitivity. Also control reverb mix and delay time.

    Comments:

    * Take 5 is the take where something interesting starts to happen.

    * It seems like the control of both feedback and delay time (crossfading to avoid pitch glide on delay time change; see the sketch after these comments) makes a musically more diverse ground for interplay. Also (guitar) control of reverb mix (actually send level) works similarly, increasing the dimensionality of interaction.

    * The analysis is not flawless, so some unintended glitches in parameter changes occur.

    * We would like to have analysis for tone/noise (flux or flatness or similar), and also more stable pitch analysis and more robust transient density.
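The crossfading mentioned in the comments above, used to avoid a pitch glide when the delay time jumps, could be sketched roughly like this. The parameter names and fade length are assumptions for illustration, not the plugin used in the session.

```python
import numpy as np

def crossfaded_delay(x, changes, sr=44100, fade=0.05):
    """Delay effect where the delay time jumps between the values listed in
    `changes` = [(start_sample, delay_seconds), ...]. Instead of sliding one
    delay tap (which causes a pitch glide), the output crossfades from the
    old tap to the new tap over `fade` seconds."""
    fade_n = max(1, int(fade * sr))
    change_map = dict(changes)
    old_tap = new_tap = int(change_map.get(0, 0.25) * sr)
    fade_pos = fade_n                     # start fully on the "new" tap
    out = np.zeros(len(x))
    for n in range(len(x)):
        if n in change_map and n != 0:
            old_tap, new_tap = new_tap, int(change_map[n] * sr)
            fade_pos = 0                  # restart the crossfade
        w = min(fade_pos, fade_n) / fade_n    # 0 -> old tap, 1 -> new tap
        fade_pos += 1
        a = x[n - old_tap] if n >= old_tap else 0.0
        b = x[n - new_tap] if n >= new_tap else 0.0
        out[n] = (1.0 - w) * a + w * b
    return out

# e.g. jump from a 500 ms to a 125 ms delay one second into the signal:
# y = crossfaded_delay(x, [(0, 0.5), (44100, 0.125)])
```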

Westerdal session April 2016

Session at Westerdal ACT, OSLO

Participants: Ylva Øyen Brandtsegg, Øyvind Brandtsegg

Objective: Studio use of cross_shimmer effect

Takes


Take 1: Cross_shimmer: Vocals as spectral input, Drumset as exciter


Take 2: As above, another take on the same musical goal.

Comments:

* Feedback not an issue in the studio setting, so the effect can be fully used as intended.

* The spectral input is taken from Øyvind’s vocal

* The musical effect works as highlighting and prolongation of features/pitches from the vocal input (a simple sketch of the cross-synthesis principle is given at the end of these notes).

* The use of the drum set as the exciter lends a more independent feel to the rhythmic action, but perhaps this evaluation is because it is not Øyvind who is playing the exciter signal (as compared with the Tape to Zero session).

* Subjective evaluation: this works quite well.

* We would like to optimize the effect so that the analysis stage uses less CPU. It is currently more CPU-heavy than Hadron (1.5×).
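To illustrate the general principle of one signal supplying the spectral content while the other acts as exciter, a very crude FFT cross-synthesis is sketched below. This is only an illustration of the idea, not the actual cross_shimmer implementation, and the frame sizes are arbitrary.

```python
import numpy as np

def cross_synth(exciter, spectral_src, framesize=2048, hop=512):
    """Overlap-add cross-synthesis: the exciter contributes the phases,
    the spectral source contributes the magnitude spectrum. A crude
    illustration of shaping drums (exciter) with the vocal spectrum."""
    win = np.hanning(framesize)
    n = min(len(exciter), len(spectral_src))
    out = np.zeros(n)
    for start in range(0, n - framesize, hop):
        e = np.fft.rfft(exciter[start:start + framesize] * win)
        s = np.fft.rfft(spectral_src[start:start + framesize] * win)
        frame = np.abs(s) * np.exp(1j * np.angle(e))
        out[start:start + framesize] += np.fft.irfft(frame) * win
    return out  # note: no output normalization; overall gain depends on the window overlap
```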