Exploring radically new modes of musical interaction in live performance
Session with Kim Henry Ortveit
February 22, 2018
Kim Henry is currently a master's student in music technology at NTNU, and as part of his master's project he has designed a new hybrid instrument. The instrument allows close interactions between what is played on a keyboard (or rather a ...
Session with Michael Duch
February 22, 2018
February 12th, we did a session with Michael Duch on double bass, exploring auto-adaptive use of our techniques. We were interested in seeing how the crossadaptive techniques could be used for personal timbral expansion for a single player. This is ...
Session with David Moss in Berlin
February 2, 2018
Thursday February 1st, we had an enjoyable session at the Universität der Künste in Berlin. This was at the Grunewaldstraße campus and generously hosted by professor Alberto De Campo. This was a nice opportunity to follow up on earlier collaboration ...
Session in UCSD Studio A
September 8, 2017
This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all ...
Convolution experiments with Jordan Morton
March 1, 2017
Jordan Morton is a bassist and singer who regularly performs using both instruments combined. This provides an opportunity to explore how the liveconvolver can work when both the IR and the live input are generated by the same musician. We did a ...
Crossadaptive session NTNU 12. December 2016
December 16, 2016
Participants: Trond Engum (processing musician), Tone Åse (vocals), Carl Haakon Waadeland (drums and percussion), Andreas Bergsland (video), Thomas Henriksen (sound technician). Video digest from session: https://www.youtube.com/watch?v=ktprXKVdqF4&feature=youtu.be Session objective and focus: The main focus in this session was to explore other ...
Evolving Neural Networks for Cross-adaptive Audio Effects
June 27, 2016
I'm Iver Jordal and this is my first blog post here. I have studied music technology for approximately two years and computer science for almost five years. During the last 6 months I've been working on a specialization project which ...
The Analyzer and MIDIator plugins
June 2, 2016
... so with all these DAW examples "where are the plugins" you might ask. Well, the most up-to-date versions will always be available in the code repo at github. BUT, I've also uploaded precompiled versions of the plugins for Windows ...
Simple analyzer-modulator setup for Reaper
June 2, 2016
Following up on the recent Ableton Live set, here's a simple analyzer-modulator project for Reaper. The routing of signals is simpler and more flexible in Reaper, so we do not have the clutter of an extra channel to enable MIDI out, rather we ...
Simple analyzer-modulator setup for Ableton Live
June 2, 2016
I've created a simple Live set to show how to configure the analyzer and MIDIator in Ableton Live. There are some small snags and peculiarities (read on), but basically it runs ok. The analyzer will not let audio through, so we ...
Simple crossadaptive mixing template for Logic
June 1, 2016
Next week we'll go to London for seminars at Queen Mary and De Montfort. We'll also do a mixing session with Gary Bromham, to experiment with the crossadaptive modulation techniques in a postproduction setting. For this purpose I've done a simple session template in ...
Mixing example, simplified interaction demo
May 24, 2016
When working further with some of the examples produced in an earlier session, I wanted to see if I could demonstrate the influence of one instrument on another instrument's sound more clearly. Here I've made an example where the ...
Cross adaptive mixing in a standard DAW
May 15, 2016
To enable the use of these techniques in a common mixing situation, we’ve made some example configurations in Reaper. The idea is to extract some feature of the modulator signal (using common tools like EQs and Compressors rather than more ...
Following up on the recent Ableton Live set, here's a simple analyzer-modulator project for Reaper.
The routing of signals is simpler and more flexible in Reaper, so we do not have the clutter of an extra channel to enable MIDI out; instead we can select a MIDI hardware output in the routing dialog for the MIDIator track. In Reaper, the MIDIator plugin will start processing all by itself (no need to open its editing window to kick it to life). You need to enable a virtual MIDI device for input (remember to also enable input for control messages) and output. This is done in Reaper Preferences / MIDI hardware settings.
The analyzer does not pass audio through, so we still need to use two input tracks: one for the analyzer and one for the actual audio input for processing. The MIDIator is set up to map the spectral flux of Analyzer channel 1 to MIDI controller 11, channel 1. The audio input is sent to a reverb channel, and we've mapped the MIDI controller (11 on channel 1) to the Room size control of the reverb.
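As a side note, the feature-to-CC mapping described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual Analyzer/MIDIator code: spectral flux is computed as the sum of positive magnitude changes between successive spectral frames, then normalized and scaled to the 0-127 range of controller 11. The normalization ceiling (`flux_max`) is an arbitrary assumption for the sketch.

```python
import numpy as np

def spectral_flux(prev_mag, mag):
    """Sum of positive spectral magnitude increases between two frames."""
    diff = mag - prev_mag
    return float(np.sum(diff[diff > 0]))

def flux_to_cc(flux, flux_max=10.0):
    """Scale a flux value to a MIDI CC value (0-127), clipping at flux_max."""
    norm = min(flux / flux_max, 1.0)
    return int(round(norm * 127))

# Two toy magnitude spectra: the second frame has more upper-bin energy.
frame_a = np.array([1.0, 0.5, 0.1, 0.0])
frame_b = np.array([1.0, 0.8, 0.9, 0.7])
cc_value = flux_to_cc(spectral_flux(frame_a, frame_b))
print(cc_value)  # this value would be sent as CC 11 on channel 1
```

A real implementation would of course compute the spectra from the incoming audio and send the CC to the virtual MIDI device, but the mapping step itself is no more than this.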
I've created a simple Live set to show how to configure the analyzer and MIDIator in Ableton Live. There are some small snags and peculiarities (read on), but basically it runs ok.
The analyzer will not let audio through, so we use two audio input tracks: one track for the actual audio processing (just a send to a reverb in our case), and one track for the analyzer. The MIDI processing also requires two tracks in Live, because Live assumes that a MIDI plugin will output audio, and so it disables the MIDI out routing when a plugin is present on a track. This is easily solved by creating a second MIDI track and selecting MIDIator as the MIDI input to that track. From this track we can route MIDI out of Live. You will want to set the MIDI out to a virtual MIDI device (e.g. loopMIDI on Windows, the IAC bus on OSX). Then enable MIDI input from the virtual MIDI device in Live's MIDI preferences: each input port has Track and Remote switches, and we want Remote enabled, since that is what allows incoming control change messages to be mapped to parameters.
In our example setup, we've enabled one modulator on the MIDIator. This is set to receive spectral flux from Analyzer 1, and to send this modulator data to MIDI channel 1, controller 11. We've mapped this to Reverb Decay Time on the effect return track. Input sounds with high flux (which means the sound is probably a bit noisy) will get a long reverb; sounds with low flux (probably a tonal or stable sound) will get a short reverb.
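The flux-to-decay relation can be written out as a toy formula. The decay range below is an arbitrary assumption for illustration; the actual range depends on how the CC is mapped onto the reverb device in Live:

```python
def cc_to_decay(cc, min_decay=0.5, max_decay=8.0):
    """Map incoming CC 11 (0-127) linearly onto a reverb decay time in seconds.

    High spectral flux -> high CC value -> long decay, as described above.
    """
    cc = min(max(cc, 0), 127)  # clamp to the valid CC range
    return min_decay + (cc / 127.0) * (max_decay - min_decay)
```

So a noisy input frame driving the controller to 127 would give the 8-second maximum decay, while a stable tonal frame near CC 0 stays close to 0.5 seconds.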
On my computer (windows), I need to open the MIDIator editing window to force it to start processing. Take care *not* to close the MIDIator window, as it will somehow stop processing when the window is closed (This only happens in Ableton Live, not sure why). To get rid of the window, just click on something in another track. This will hide the MIDIator window without disabling it.
Next week we'll go to London for seminars at Queen Mary and De Montfort. We'll also do a mixing session with Gary Bromham, to experiment with the crossadaptive modulation techniques in a postproduction setting. For this purpose I've made a simple session template in Logic (as it is a DAW that Gary uses regularly).
To keep the Logic session as clean as possible, we will do the signal analysis and preparation of the modulation signal in Reaper. This also allows us to use the VST versions of the analysis and modulation mapping plugins we've created earlier. (It should be possible to use these as AU plugins too, but that is somewhat more experimental as of yet.) Signal routing between Logic and Reaper is done via Jack: both Logic and Reaper use Jack as the audio driver, while Jack communicates with the audio hardware for physical I/O. The signal to be analyzed (guitar on track 1 in our Logic session) is fed to a separate bus track in Logic, and the output from this track is sent to Reaper via Jack. We analyze the pitch of the signal in Reaper, send the pitch tracking data to the vstMIDIator plugin, and route the MIDI signal from Reaper back to Logic via the IAC bus.

In Logic, it is relatively straightforward to map an incoming MIDI signal to any parameter. You can find this in Logic Preferences/Automation, where there are Learn and Edit buttons towards the bottom of the dialog. To learn a mapping, touch the desired destination parameter, then send the MIDI message to be mapped. In our example, we use an EQ on the vocals (track 2) and control the frequency of a narrow band in the EQ with the MIDI modulator signal. This way the pitch of the guitar controls the EQ of the vocals.
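The pitch-to-EQ mapping has to squeeze a frequency range into 128 CC steps and back again. A logarithmic scale keeps the resolution musically even across octaves. The 80-1280 Hz range below is an assumed guitar range for illustration, not a value taken from the actual session:

```python
import math

def pitch_to_cc(freq_hz, lo=80.0, hi=1280.0):
    """Map a tracked pitch (Hz) to CC 0-127 on a logarithmic frequency scale."""
    freq_hz = min(max(freq_hz, lo), hi)  # clamp to the assumed pitch range
    norm = math.log2(freq_hz / lo) / math.log2(hi / lo)
    return int(round(norm * 127))

def cc_to_eq_freq(cc, lo=80.0, hi=1280.0):
    """Inverse mapping: scale a learned CC value back to an EQ band frequency."""
    return lo * (hi / lo) ** (cc / 127.0)
```

The 7-bit CC resolution means the round trip is only approximate (a few Hz of quantization error in the lower octaves, more higher up), which is usually acceptable for a broad EQ sweep but worth keeping in mind for narrow bands.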
When working further with some of the examples produced in an earlier session, I wanted to see if I could demonstrate the influence of one instrument on another instrument's sound more clearly. Here I've made an example where the guitar controls the effects processing of the vocal. For simplicity, I've looped a small segment of the vocal take, to create a track that is relatively static, so that the changes in the effect processing should be easy to spot. For the same reason, the vocal does not control anything on the guitar in this example.
The Reaper session for the following examples can be found here.
Example track: The guitar track is split into two control signals, one EQ’ed to contain only low frequencies, the other with only high frequencies. The control signals are then gated, and used as sidechain control signals for two different effects tracks processing the vocal signal. The vocal signal is just a short loop of quite static content, to make it easier to identify the changes in the effects processing.
Example track: As above, but here the original vocal track is used as input to the effects, giving a more dynamic and flexible musical expression.
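The band-splitting and gating in these examples can be sketched roughly as follows. The band limits and threshold are arbitrary illustration values, and the session itself uses Reaper's stock EQ and gate with sidechain routing rather than any code; this just shows the logic of "energy in a band of the guitar opens an effect send on the vocal":

```python
import numpy as np

def band_energy(signal, sr, lo, hi):
    """RMS magnitude of the signal within [lo, hi) Hz, via the FFT spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    band = spectrum[(freqs >= lo) & (freqs < hi)]
    return float(np.sqrt(np.mean(band ** 2))) if band.size else 0.0

def gate(value, threshold):
    """Binary gate: the effect send is open (1.0) when band energy is high."""
    return 1.0 if value > threshold else 0.0

# Toy "guitar" control signals: one low-frequency, one high-frequency.
sr = 8000
t = np.arange(sr) / sr
low_guitar = np.sin(2 * np.pi * 100 * t)    # energy in the low band
high_guitar = np.sin(2 * np.pi * 2000 * t)  # energy in the high band

# Each gated band would open a different effect send on the vocal track.
send_a = gate(band_energy(low_guitar, sr, 20, 500), threshold=10.0)
send_b = gate(band_energy(high_guitar, sr, 20, 500), threshold=10.0)
```

In the actual session this happens continuously per block rather than over a whole buffer, but the decision being made is the same.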
To enable the use of these techniques in a common mixing situation, we’ve made some example configurations in Reaper. The idea is to extract some feature of the modulator signal (using common tools like EQs and Compressors rather than more advanced analysis tools), and use this signal to affect the processing of another signal.
By using sidechaining we can allow the energy level of one signal to gate/compress/expand another signal. Using EQ and/or multiband compression on the modulator signal, we can extract parts of the spectrum, for example so that the processing will be applied only if there is energy in the deep frequencies of the modulator signal. Admittedly, this method is somewhat limited as compared with a full crossadaptive approach, but it is widely available and as such has a practical value. With some massaging of the modulator signal and creative selection of effects applied to the affected signal, this method still can produce a fairly wide range of cross-adaptive effects.
By using the automation techniques available in Reaper (we will investigate how to do this in other hosts too), we can map the signal energy directly to parameter changes. This allows cross-adaptive processing in the full sense of the term. The technique (as implemented in Reaper) is limited by the kind of features one can extract from the modulator signal (energy level only) and by the control signal mapping being one-to-one, so that it is not possible to mix several modulator sources to control an effects parameter. Still, this provides a method to experiment easily with cross-adaptive mixing techniques in a standard DAW, with no extra tools required.
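Conceptually, this kind of audio-driven parameter control amounts to an envelope follower scaled onto a parameter range. A minimal sketch of that single-source mapping (the attack/release values and the parameter range are illustrative assumptions, not Reaper's internals):

```python
def follow_envelope(samples, attack=0.2, release=0.01):
    """Simple attack/release envelope follower over absolute sample values."""
    env, out = 0.0, []
    for x in samples:
        coeff = attack if abs(x) > env else release
        env += coeff * (abs(x) - env)
        out.append(env)
    return out

def energy_to_param(env, lo=0.0, hi=1.0):
    """Map follower output (assumed 0-1) linearly onto a parameter range."""
    return lo + min(max(env, 0.0), 1.0) * (hi - lo)
```

Because one follower drives one parameter, combining several modulator sources would require merging their envelopes before this mapping stage, which is precisely what the stock automation does not offer.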
Sidechaining used in an unconventional cross-adaptive manner is demonstrated in the first example; signal automation based on modulator energy is demonstrated in the second.
Modulator signal: The signal being used to affect processing of another signal
Affected signal: The signal being modulated