Cross-adaptive mixing in a standard DAW

To enable the use of these techniques in a common mixing situation, we've made some example configurations in Reaper. The idea is to extract some feature of the modulator signal (using common tools like EQs and compressors rather than more advanced analysis tools) and use the extracted feature to affect the processing of another signal.

By using sidechaining we can let the energy level of one signal gate/compress/expand another signal. Using EQ and/or multiband compression on the modulator signal, we can extract parts of the spectrum, for example so that the processing is applied only if there is energy in the low frequencies of the modulator signal. Admittedly, this method is somewhat limited compared with a full cross-adaptive approach, but it is widely available and as such has practical value. With some massaging of the modulator signal and creative selection of effects applied to the affected signal, this method can still produce a fairly wide range of cross-adaptive effects.
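As a concrete illustration of the signal flow, here is a minimal offline sketch (Python with numpy/scipy, not part of the Reaper example projects): the low band of the modulator is extracted, its energy is followed with a simple envelope follower, and that envelope ducks the affected signal. The cutoff, threshold and depth values are arbitrary placeholders.

```python
import numpy as np
from scipy.signal import butter, lfilter

def lowband_envelope(modulator, fs, cutoff_hz=120.0, smooth_ms=50.0):
    """Follow the energy of the modulator below cutoff_hz."""
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    low = lfilter(b, a, modulator)
    rectified = np.abs(low)
    coeff = np.exp(-1.0 / (fs * smooth_ms / 1000.0))  # one-pole smoothing
    env = np.zeros_like(rectified)
    for i in range(1, len(rectified)):
        env[i] = coeff * env[i - 1] + (1.0 - coeff) * rectified[i]
    return env

def duck(affected, envelope, threshold=0.05, depth=0.8):
    """Reduce the affected signal whenever the modulator's low band is active."""
    gain = np.where(envelope > threshold, 1.0 - depth, 1.0)
    return affected * gain

# fs, modulator and affected would come from your stems; here the gating
# decision is made only from energy below 120 Hz in the modulator.
```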

By using automation techniques available in Reaper (we will investigate how to do this in other hosts too), we can map the signal energy directly to parameter changes. This allows cross-adaptive processing in the full sense of the term. The technique (as implemented in Reaper) is limited by the kind of features one can extract from the modulator signal (energy level only) and by the control signal mapping being one-to-one, so that it is not possible to mix several modulator sources to control an effect parameter. Still, this provides a method to experiment easily with cross-adaptive mixing techniques in a standard DAW with no extra tools required.
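A rough sketch of the kind of one-to-one mapping described above, with hypothetical parameter and envelope ranges: a single modulator energy value is scaled into the range of a single effect parameter, and no mixing of several modulator sources takes place.

```python
def envelope_to_param(env_value, param_min=0.0, param_max=1.0,
                      env_floor=0.0, env_ceil=0.2):
    """Map one modulator energy value linearly onto one effect parameter."""
    norm = (env_value - env_floor) / (env_ceil - env_floor)
    norm = max(0.0, min(1.0, norm))  # clamp to 0..1
    # One modulator controls one parameter (no mixing of several sources)
    return param_min + norm * (param_max - param_min)

# e.g. a vocal energy value of 0.1 mapped onto a 0..1 reverb mix parameter
print(envelope_to_param(0.1))  # -> 0.5
```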

Example projects:

Sidechaining used in an unconventional cross-adaptive manner is demonstrated in the CA_sidechain_pitchverb Reaper project.

Signal automation based on modulator energy is demonstrated in the CA_EnvControl Reaper project.

Terms:

Modulator signal: The signal used to affect the processing of another signal.
Affected signal: The signal whose processing is being modulated.

Signal (modulator) routing considerations

Analysis happens at the source, but we may want to mix modulation signals from several analyzed sources, so where do we do this? In a separate modulation matrix, or at the destination? Mixing at the destination allows us to activate only those modulation sources that actively contribute to a particular effect parameter (which might be a useful simplification), but it also clutters the destination parameter with the scaling/mapping/shaping/mixing of the modulators.
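To make the alternatives a bit more tangible, here is a rough sketch of what a separate modulation matrix could look like (all source and destination names are hypothetical): each route scales and shapes one analysis source, and all routes to a given destination are mixed into one parameter value.

```python
class ModulationMatrix:
    def __init__(self):
        # destination parameter -> list of (source_name, scale, shape_function)
        self.routes = {}

    def add_route(self, destination, source, scale=1.0, shape=lambda x: x):
        self.routes.setdefault(destination, []).append((source, scale, shape))

    def value_for(self, destination, analysis):
        """Mix all modulator sources routed to one destination parameter."""
        total = 0.0
        for source, scale, shape in self.routes.get(destination, []):
            total += scale * shape(analysis.get(source, 0.0))
        return total

# Example: vocal and drum energy both modulate the reverb mix of a guitar
matrix = ModulationMatrix()
matrix.add_route("guitar_reverb_mix", "vocal_energy", scale=0.7)
matrix.add_route("guitar_reverb_mix", "drum_energy", scale=-0.3)
print(matrix.value_for("guitar_reverb_mix",
                       {"vocal_energy": 0.4, "drum_energy": 0.2}))
```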

For integration into a traditional workflow in a DAW, do we want the analysis signals to be scaled/shaped/mixed at the source or at the destination? For us in the project, it might seem natural to have a dedicated routing central (a standalone application), or to have the routing at the destination (in the effect, as in the interprocessing toolkit of 2015). However, to allow any commercial effect to be used, scaling/mixing/shaping at the destination is not feasible. For a sound engineer it might be more intuitive to have the signal massaging at the source (e.g. “I want the vocals to control the reverb mix for the drums, so I adjust this at the vocal source”).

Integration with standard control surface protocols (Mackie/Eucon) might allow the setting of an offset (from which deviations/modulations can occur), then use the analysis signals as modulators that increase or decrease this nominal value. With this kind of routing we must take care to reset the nominal value on playback start, so that the modulations do not accumulate in an unpredictable manner.
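A minimal sketch of this “nominal value plus modulation” idea, with hypothetical parameter names and ranges: the nominal value is stored, modulators deviate from it within the legal range, and a reset restores it on playback start so deviations do not accumulate.

```python
class ModulatedParameter:
    def __init__(self, nominal, lo=0.0, hi=1.0):
        self.nominal = nominal
        self.lo, self.hi = lo, hi

    def reset(self):
        """Call on playback start: return to the stored nominal value."""
        return self.nominal

    def apply(self, modulation):
        """Deviate from the nominal value, clamped to the legal range."""
        return max(self.lo, min(self.hi, self.nominal + modulation))

reverb_mix = ModulatedParameter(nominal=0.25)
print(reverb_mix.apply(+0.1))  # modulator pushes the value to 0.35
print(reverb_mix.reset())      # back to 0.25 when playback restarts
```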

… which leads us back to the concept of a standalone master routing central, where the nominal value can be stored and modulations applied before sending the control signal to the effect. A standalone application might also be natural as the “one stop solution” when these kinds of techniques are needed, and it would allow the rest of the mix setup to remain uncluttered. The drawback is that the nominal value for the automated effect must be set in the standalone, rather than at the effect (where we might normally expect to set it). Perhaps a two-way communication between the standalone and the DAW (i.e. the effects plugin) could allow setting the nominal value in the effect itself, then transmitting this value to the standalone routing central, doing the modulation, and sending the value back to the DAW/effect? Is there some kind of distinction between the outbound and inbound control messages for control surfaces, i.e. such that the outbound message is unaffected by the modulations done on the effect parameter? Unlikely… it would presumably require two different control channels. But there must be some kind of feedback prevention in these control surface protocols? “Local off” or similar?
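A speculative sketch of this two-way flow, with a hypothetical callback standing in for whatever protocol (OSC, control surface messages) would carry the values: the nominal value set at the effect is stored in the routing central, modulation is applied on each update, and a simple “local off” style check ignores incoming messages that merely echo our own output.

```python
class RoutingCentral:
    def __init__(self, send_to_daw):
        self.send_to_daw = send_to_daw   # callback towards the DAW/effect
        self.nominal = 0.0
        self.last_sent = None

    def on_value_from_daw(self, value):
        """The engineer moved the control in the effect itself."""
        if value == self.last_sent:
            return               # "local off": ignore our own echo
        self.nominal = value     # store the new nominal value

    def tick(self, modulation):
        """Apply the current modulation and send the result back to the effect."""
        out = self.nominal + modulation
        self.last_sent = out
        self.send_to_daw(out)
```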

Also, regarding the control surface protocol as a possible interface for signal interaction: is it fast enough? What are the latency and bandwidth?

Interaction types

We see several types of interactions already:

  • ornamenting: expands or embroiders the other sound, creating features/events (in time) that were not there before
  • transplanting: transfers its own timbral character to the other sound
  • inhibiting: when one sound plays, the other is damped/muted (via amplitude, EQ, spectrum, or other means); this is also used in traditional sidechaining to make room for a sound in the mix.
  • enhancing: the opposite of inhibiting, e.g. the other sound is only heard when the first one plays (see the sketch after this list).
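A small sketch of inhibiting and enhancing as mirror images of the same mapping (threshold and depth values are arbitrary placeholders): the same modulator envelope either ducks the other sound or opens it.

```python
def inhibit_gain(env, threshold=0.05, depth=0.8):
    # When the modulator plays, the other sound is damped
    return 1.0 - depth if env > threshold else 1.0

def enhance_gain(env, threshold=0.05):
    # The other sound is only heard while the modulator plays
    return 1.0 if env > threshold else 0.0
```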

Some types (e.g. inhibiting and enhancing) will be hard to apply in a live performance due to acoustic sound bleed. It will probably also be hard to apply them in any kind of realtime interactive situation with instruments that have a significant acoustic sound level. The other sound source can be isolated, but my own sound will always be heard directly (e.g. for a singer or a wind instrument), so the other musician cannot mute me in a way that I will experience in the same way as the audience (or the other player) will.