Signal (modulator) routing considerations

Analysis happens at the source, but we may want to mix modulation signals from several analyzed sources, so where should this mixing happen? In a separate modulation matrix, or at the destination? Mixing at the destination lets us activate only the modulation sources that actually contribute to a particular effect parameter (which might be a useful simplification), but it also clutters the destination parameter with the scaling/mapping/shaping/mixing of the modulators.
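As a very rough sketch of the "separate modulation matrix" option (all class and signal names below are hypothetical, not taken from any existing toolkit), the scaling/shaping/mixing could be collected in one routing object, so that neither the source nor the destination parameter is cluttered with it:

    # Sketch of a separate modulation matrix: each route scales/shapes one
    # analysis signal, and the matrix sums the active routes per destination
    # parameter on top of its nominal value. Names are hypothetical.
    class Route:
        def __init__(self, source, destination, scale=1.0, shape=lambda x: x):
            self.source = source            # analysis signal name, e.g. "vocal.rms"
            self.destination = destination  # effect parameter name, e.g. "drums.reverb_mix"
            self.scale = scale              # modulation depth
            self.shape = shape              # optional mapping curve, e.g. lambda x: x**2

    class ModulationMatrix:
        def __init__(self):
            self.routes = []

        def mix(self, analysis, nominal):
            """Return destination values: nominal value plus the sum of active modulators."""
            out = dict(nominal)
            for r in self.routes:
                if r.source in analysis:    # only sources that actually contribute are active
                    out[r.destination] = out.get(r.destination, 0.0) \
                        + r.scale * r.shape(analysis[r.source])
            return out

    # Example: the vocal level opens the reverb mix on the drums.
    matrix = ModulationMatrix()
    matrix.routes.append(Route("vocal.rms", "drums.reverb_mix", scale=0.5))
    print(matrix.mix({"vocal.rms": 0.5}, {"drums.reverb_mix": 0.25}))  # -> {'drums.reverb_mix': 0.5}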

For integration into a traditional DAW workflow, do we want the analysis signals to be scaled/shaped/mixed at the source or at the destination? For us in the project, it might seem natural to have a dedicated routing central (a standalone application), or to have the routing at the destination (in the effect, as in the interprocessing toolkit of 2015). However, to allow any commercial effect to be used, scaling/mixing/shaping at the destination is not feasible. For a sound engineer it might be more intuitive to have the signal massaging at the source (e.g. “I want the vocals to control the reverb mix for the drums, so I adjust this at the vocal source”).
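For contrast, a source-side sketch might attach the scaling and shaping to the analyzed source itself, so the destination only ever receives a finished control value (the function name, curve and parameter values here are made up for illustration):

    # Source-side "signal massaging": the vocal source owns the mapping from
    # its analysis signal to the drum reverb mix; the routing then only needs
    # to deliver the resulting value. Hypothetical names and curve.
    def vocal_to_drum_reverb(vocal_rms, floor=0.1, depth=0.7):
        """Map vocal level to a reverb mix amount, configured at the vocal source."""
        shaped = min(vocal_rms, 1.0) ** 0.5   # gentle curve: quiet vocals already open the reverb somewhat
        return floor + depth * shaped

    print(vocal_to_drum_reverb(0.25))          # -> roughly 0.45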

Integration with standard control surface protocols (Mackie/Eucon) might allow us to set an offset (from which deviations/modulations can occur) and then use the analysis signals as modulators increasing or decreasing this nominal value. With this kind of routing we must take care to reset the nominal value on playback start, so the modulators do not accumulate in an unpredictable manner.
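A minimal sketch of that offset-plus-modulation idea (just the arithmetic, not an actual Mackie or Eucon message), including the reset on playback start:

    # A parameter with a stored nominal value (the offset set by the engineer)
    # and a modulation that is recomputed every update rather than accumulated,
    # plus a reset on playback start. Hypothetical, protocol-agnostic.
    class ModulatedParameter:
        def __init__(self, nominal):
            self.nominal = nominal       # the offset from which deviations occur
            self.modulation = 0.0        # current deviation from the nominal value

        def on_playback_start(self):
            self.modulation = 0.0        # reset so old modulation cannot linger or accumulate

        def update(self, analysis_value, depth):
            # Recompute (not add to) the modulation, so it cannot drift over time.
            self.modulation = depth * analysis_value
            return max(0.0, min(1.0, self.nominal + self.modulation))

    p = ModulatedParameter(nominal=0.3)
    p.on_playback_start()
    print(p.update(analysis_value=0.5, depth=0.4))   # -> 0.5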

… which leads us back to the concept of a standalone master routing central, where the nominal value can be stored and modulations applied before the control signal is sent to the effect. A standalone application might also be the natural “one-stop solution” when these kinds of techniques are needed, and it would allow the rest of the mix setup to remain uncluttered. The drawback is that the nominal value for the automated effect must be set in the standalone, rather than at the effect (where we would normally expect to set it). Perhaps two-way communication between the standalone and the DAW (i.e. the effect plugin) could allow setting the nominal value in the effect itself, transmitting this value to the standalone routing central, doing the modulation there, and sending the value back to the DAW/effect? Is there some kind of distinction between the outbound and inbound control messages for control surfaces, i.e. is the outbound message unaffected by the modulations done on the effect parameter? Unlikely… it would presumably require two different control channels. But there must be some kind of feedback prevention in these control surface protocols? “Local off” or similar?
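If that two-way communication turns out to be possible, the round trip might look roughly like the following (with a naive "local off"-style guard against feedback; none of this reflects how an actual control surface protocol works):

    # Sketch of the proposed round trip between the effect and a standalone
    # routing central: the nominal value is set in the effect and mirrored to
    # the routing central, which sends back the modulated value. A crude
    # feedback guard ignores echoes of our own outbound message. Hypothetical.
    class RoutingCentral:
        def __init__(self):
            self.nominal = 0.0
            self.last_sent = None

        def receive_from_daw(self, value):
            # "Local off"-style guard: do not treat our own echoed value as a new nominal.
            if self.last_sent is not None and abs(value - self.last_sent) < 1e-6:
                return
            self.nominal = value           # the engineer moved the control in the effect

        def send_to_daw(self, modulation):
            self.last_sent = max(0.0, min(1.0, self.nominal + modulation))
            return self.last_sent

    rc = RoutingCentral()
    rc.receive_from_daw(0.3)               # nominal value set at the effect
    out = rc.send_to_daw(modulation=0.2)   # routing central returns the modulated value
    rc.receive_from_daw(out)               # the echo is ignored; the nominal stays at 0.3
    print(rc.nominal, out)                 # -> 0.3 0.5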

Also, regarding the control surface protocol as a possible interface for signal interaction: is it fast enough? What are its latency and bandwidth?

Interaction types

We see several types of interactions already:

  • ornamenting: expands or embroiders the other sound, creating features/events (in time) that were not there before
  • transplanting: transfers its own timbral character to the other sound
  • inhibiting: when one sound plays, the other is damped/muted (via amplitude, eq, spectrum, other); this is also used in traditional sidechaining to make room for a sound in the mix (see the sketch after this list)
  • enhancing: the opposite of inhibiting, e.g. the other sound is only heard when the first one plays.
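As a sketch of the inhibiting/enhancing pair in their simplest, amplitude-only form, the envelope of one sound can either duck or gate the other (plain Python/NumPy, with made-up coefficient values):

    # Inhibiting/enhancing as simple amplitude interactions: the envelope of
    # sound A either ducks (inhibits) or gates open (enhances) sound B.
    import numpy as np

    def envelope(x, attack=0.9, release=0.999):
        """Very simple one-pole envelope follower (faster rise than fall)."""
        env = np.zeros_like(x)
        level = 0.0
        for i, v in enumerate(np.abs(x)):
            coeff = attack if v > level else release
            level = coeff * level + (1.0 - coeff) * v
            env[i] = level
        return env

    def inhibit(b, env_a, depth=1.0):
        """Duck sound B when sound A is loud (traditional sidechain behaviour)."""
        return b * np.clip(1.0 - depth * env_a, 0.0, 1.0)

    def enhance(b, env_a, depth=4.0):
        """The opposite: sound B is only heard when sound A plays."""
        return b * np.clip(depth * env_a, 0.0, 1.0)

    # Example: a noise burst in A ducks (or opens) a steady tone in B.
    sr = 1000
    a = np.concatenate([np.zeros(sr), 0.5 * np.random.randn(sr), np.zeros(sr)])
    b = 0.3 * np.sin(2 * np.pi * 110 * np.arange(3 * sr) / sr)
    env_a = envelope(a)
    ducked, gated = inhibit(b, env_a), enhance(b, env_a)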

Some types (e.g. inhibiting and enhancing) will be hard to apply in a live performance due to acoustic sound bleed. It will probably also be hard to apply them in any kind of realtime interactive situation with instruments that have a significant acoustic sound level. The other sound source can be isolated, but my own sound will always be heard directly (e.g. for a singer or a wind instrument), so the other musician cannot mute me in a way that I will experience the same way the audience (or the other player) will.