Session 19. October 2016

Location: Kjelleren, Olavskvartalet, NTNU, Trondheim

Participants: Maja S. K. Ratkje, Siv Øyunn Kjenstad, Øyvind Brandtsegg, Trond Engum, Andreas Bergsland, Solveig Bøe, Sigurd Saue, Thomas Henriksen

Session objective and focus

Although the trio BRAG RUG has experimented with crossadaptive techniques in rehearsals and concerts during the last year, this was the first dedicated research session on the issues specific to these techniques. The musicians were familiar with the interaction model of live processing, where electronic modification of the instrumental sound in real time is familiar ground. Notably, the basic concepts of feature extraction and the use of these features as modulators for the effects processing were also familiar to the musicians before the session. Still, we opted to start with very basic configurations of features and modulators as a means of warming up. The warm-up is valid both for the performative attention and the listening mode, and it is also a way of verifying that the technical setup works as expected.
Simply put, the session objective and focus is to explore further the crossadaptive interactions in practice. As of now, the technical solutions work well enough to start making music, and the issues that arise during musical practice are many-faceted, so we have opted not to put any specific limitations or preconceived directions on how these explorations should be made.

As preparation for the session we (Oeyvind) had prepared a set of possible configurations (sets of features mapped to sets of effects controllers). As the musicians were familiar with the concepts and techniques, we expected that a larger set of possible situations to explore would be needed. As it turned out, we spent more time investigating the simpler situations, so we did not get to test all of the prepared interaction situations. The extra time spent on the simpler situations was not due to any unforeseen problems; rather, there were interesting issues to explore even in the simpler situations, and it seemed more valid to explore those fully than to thinly test a large set of situations. With this in mind, we could easily have continued our explorations for several more days.
The musicians’ previous experience with these techniques provided an effective base for communication around the ideas, issues and problems. The musicians’ ideas were clearly expressed, and easily translated into modifications of the mapping. With the current status of the technical tools, we were quickly able to modify the mappings based on ad hoc suggestions during the sessions, for example changing the types of features (by having a plethora of usable features readily extracted), inverting, scaling, combining different features, gating and so on. We did not get around to testing the new modulator mapping of “absolute difference” at all, so we note that as a future experiment. One thing Oeyvind notes as still being difficult is changing the modulation destination. This is presently done by changing MIDI channels and controller numbers, with no reference to what these controller numbers are mapped to. This also partly relates to the Reaper DAW lacking a single place to look up active modulators. There is no known way for our MIDIator to poll the DAW for the names of the modulator destinations (the effect parameter ultimately being controlled). The MIDIator has a “notes” field that allows us to write any kind of notes, intended for jotting down destination parameter names. One obvious shortcoming of this is that it relies solely on manual (human) updates: if one changes the controller number without updating the notes, there will be a discrepancy, perhaps even more confusing than not having any notes at all.

Takes


  1. [numbered as take 1] Amplitude (rms_dB) on voice controls reverb time (size) on drums. Amplitude (rms_dB) on drums controls delay feedback on vocals. In both cases, higher amplitude means more (longer, bigger) effect.

Initial comments: When the effect is big on high amplitude, the action is very parallel, with the energy flowing in only one direction (intensity-wise). In this particular take, the effects modulation was a bit on/off, not responding to the playing in a nuanced manner. Perhaps the threshold for the reverb was also too low, so it responded too much (even to low-amplitude playing). It was a bit unclear for the performers how much control they had over the modulations.
The use of two time-based effects at the same time tends to melt things together, both working in the same direction/dimension (when is this wanted or unwanted in a musical situation?).

Reflection when listening to take 1 after the session: Although we have now experienced two sessions where the musicians preferred controlling the effects in an inverted manner (not like here in take 1, but like we do later in this session, f.ex. take 3), when listening to the take it sometimes feels like a natural extension of the performance to simply have more effect when the musicians “lean into it” and play loud. Perhaps we could devise a mapping strategy where the fast response of the system is to reduce the effect (e.g. the reverb time), and then, if the energy (f.ex. the amplitude) remains high over an extended duration (f.ex. longer than 30 seconds), the effect will increase again. This kind of mapping strategy suggests that we could create quite intricate interaction schemes with just some simple feature extraction (like the amplitude envelope), and that this could be used to create intuitive and elaborate musical forms (“form” here defined as the evolution of effects mapping automation over time).
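As a thought experiment, a minimal sketch of such a dual-timescale strategy could look like the following Python fragment. The frame rate, threshold and 30-second window are assumptions made for illustration; this is not part of the current implementation.

# Hypothetical sketch of the dual-timescale mapping idea described above:
# a loud input immediately reduces the effect amount, but if the energy
# stays high for a long time (here ~30 seconds) the effect grows back.
# Frame rate, threshold and window length are illustrative assumptions.

FRAME_RATE = 100.0          # control frames per second (assumed)
LONG_WINDOW = 30.0          # seconds of sustained energy before re-opening

class DualTimescaleMapper:
    def __init__(self):
        self.sustain = 0.0   # seconds the signal has stayed above threshold

    def step(self, amp_db, threshold_db=-30.0):
        """Map one amplitude frame (in dB) to an effect amount in 0..1."""
        fast = 1.0 if amp_db < threshold_db else 0.0   # quiet -> full effect
        if amp_db >= threshold_db:
            self.sustain += 1.0 / FRAME_RATE
        else:
            self.sustain = 0.0
        slow = min(self.sustain / LONG_WINDOW, 1.0)    # grows over ~30 s
        return max(fast, slow)   # the slow term gradually overrides the fast cut

mapper = DualTimescaleMapper()
print(mapper.step(-10.0))   # loud input: effect drops towards 0 at first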


  1. [numbered as take 1.2] Same mappings as in take 1, with some fine tuning: more direct sound on the drums; less reverb on the drums; shorter release on the drums’ amplitude envelope (the vocal delay feedback stops sooner when the drums go from loud to soft); a more nuanced mapping of the vocal amplitude tracking to reverb size.
  2. [numbered as take 1.3] Inverted mapping, using the same features and effects parameters as in takes 1 and 2. Here, louder drums means less feedback on the vocal delay; similarly, louder vocals gives a shorter reverb time on the drums.

There was a consensus amongst all people involved in the session that this inverted mapping is “more interesting”. It also opens up the possibility for some nice timbral gestures: when the sound level goes from loud to quiet, the room opens up and prolongs the small quiet sounds. Even though this can hardly be described as a “natural” situation (it is not something we can find in nature, acoustically), it provides a musical tension of opposing forces, and also a musical potential for exploring spatiality in relation to dynamics.


  1. [numbered as take 2] Similar to the mapping for take 3, but with a longer release time, meaning the effect will take longer to come back after having been reduced.

The interaction between the musicians, and their intentional playing of the effects control, is tangible in this take. One can also hear the drummer’s shouts of approval here and there (0:50, 1:38).


  1. [numbered as take 3] Changing the features extracted to control the same effects: still controlling reverb time (size) and delay feedback, but now using transient density on the drums to control the delay feedback for the vocals, and the pitch of the vocals to control the reverb size for the drums. Faster drum playing means more delay feedback on the vocals; higher vocal pitch means longer reverberation on the drums.

Maja reacted quite strongly to this mapping, because pitch is a very prominent and defining parameter of the musical statement. Linking it to control a completely different parameter brings something very unorthodox into the musical dialogue. She was very enthusiastic about this opportunity, being challenged to exploit this uncommon link. It is not natural or intuitive at all, and thus hard work for the performer, as one’s intuitive musical response and expression is contradicted directly. She made an analogy to a “score”, like written scores for experimental music. In the project we have earlier noted that the design of interaction mappings (how features are mapped to effects) can be seen as compositions. It seems we think in the same way about this, even though the terminology is slightly different. For Maja, a composition involves form (over time), while a score to a larger extent can describe a situation. Oeyvind, on the other hand, uses “composition” to describe any imposed guideline put on the performer, anything that instructs the performer in any way. Regardless of the specifics of terminology, we note that the pitch-to-reverb-size mapping was effective in creating a radical or unorthodox interaction pattern.
One could also wish to test this kind of mapping in the inverted manner, letting higher transient density on the drums give less feedback, and lower pitches give larger reverbs. At this point in the session, we decided to spend the last hour trying something entirely different, just to have also touched upon other kinds of effects.


  1. [numbered as take 4] Convolution and resonator effects. We tested a new implementation of the live convolver. Here, the impulse response for the convolution is live sampled, and one can start convolving before the IR recording is complete. An earlier implementation was the subject of another blog post; the current implementation is less prone to clicks during recording and considerably less CPU intensive. The amplitude of the vocals is used to control a gate (Schmitt trigger) that in turn controls the live sampling: when the vocal signal is stronger than the selected threshold, recording starts, and it continues as long as the vocal signal is loud. The signal being recorded is the sound of the drums. The vocal signal is used as the convolution input, so any utterance on the vocals gets convolved with the recent recording of the drums.
    In this manner, the timbre of the drums is triggered (excited, energized) by the actions of the vocals. Similarly, we have a resonator capturing the timbral character of the vocals, and this resonator is triggered/excited/energized by the sound of the drums. The transient density of the drums is run through a gate (Schmitt trigger), similar to the one used on the vocal amplitude. Fast playing on the drums triggers recording of the timbral resonances of the voice. The resonator sampling (one could also say resonator tuning) is a one-shot process, triggered each time the drums have been playing slowly and then change to faster playing.
    Summing up the mapping: vocal amplitude triggers IR sampling (using the drums as source, applying the effect to the vocals); drum transient density triggers vocal resonance tuning (applying the effect to the drums). A rough sketch of the gating logic follows below.
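For reference, a minimal sketch of such a Schmitt trigger (a gate with hysteresis operating on per-frame amplitude values) could look like this; the thresholds are assumptions for the example, not the values used in the session.

# Illustrative Schmitt trigger for starting/stopping live IR sampling.
# Thresholds (and the idea of returning a simple boolean) are assumptions.

class SchmittGate:
    def __init__(self, on_db=-30.0, off_db=-40.0):
        self.on_db, self.off_db = on_db, off_db   # hysteresis thresholds
        self.active = False

    def step(self, level_db):
        """Return True while the gate is open (recording should run)."""
        if not self.active and level_db > self.on_db:
            self.active = True      # signal got loud: start recording
        elif self.active and level_db < self.off_db:
            self.active = False     # signal went quiet: stop recording
        return self.active

gate = SchmittGate()
for db in [-50, -25, -28, -45, -50]:
    print(db, gate.step(db))   # opens at -25 dB, closes again at -45 dB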

The musicians very much liked playing with the resonator effect on the drums. It gave a new dimension to taking features from the other instrument and applying them to one’s own. In some ways it resembles “unmoderated” (non-crossadaptive) improvisation, as in that setting these two specific musicians also commonly borrow material from each other. The convolution process, on the other hand, seemed a bit more static, somewhat like one-shot playback of a sample (although it is not quite that). The convolution process can also be viewed as getting delay patterns from the drum playing. However, the exact patterns do not differ much when fed a dense signal, so changes in the delay patterns are not so dramatic. This can be exploited by intentionally playing in such a way as to reveal the delay patterns, but that is a quite specific and narrow musical situation. Another thing one might expect to get from IR sampling is capturing “the full sound” of the other instrument. Perhaps the drums are not dramatically different/dynamic enough to release this potential, or perhaps the drum playing could be adapted to the IR sampling situation, putting more emphasis on different sonic textures (wooden sounds, boomy sounds, metallic sounds, clicky sounds). In any case, in our experiments in this session we did not seem to release the full expected potential of the convolution technique.

Comments:

Most of the specific comments are given for each take above, but here are some general remarks for the session:

Status of tools:
We seem to have more options (in the tools and toys) than we have time to explore, which is a good indication that we can perhaps relax the technical development and focus more on practical exploration for a while. It also indicates that we could (perhaps should) book practical sessions over several consecutive days, to allow more in-depth exploration.

Making music:
It also seems that we are well on the way to actually making music with these techniques, even though there is still quite a bit of resistance in them. What is good is that we have found some situations that work well musically (for example amplitude inversely affecting room size). We can then expand on and contrast these with new interaction situations where we experience more resistance (for example, as we see when using pitch as a controller). We also see that even the successful situations appear static after some time, so the urge to manually control and shape the result (as in live processing) is clearly apparent. We can easily see that this is what we will probably do when playing with these techniques outside the research situation.

Source separation:
During this studio session, we put the vocals and drums in separate rooms to avoid audio bleed between the instruments. This made for a clean and controlled input signal to the feature extractor, and we were better able to determine the quality of the feature extraction. Earlier sessions done with both instruments in the same room gave much more crosstalk/bleed, and we would sometimes be unsure whether the feature extractor worked correctly. As of now it seems to work reasonably well. Two days after this session, we also demonstrated the techniques on stage with amplification via P.A. This proved (as expected) very difficult due to a high degree of bleed between the different signals. We should consider making a crosstalk-reduction processor. This might be implemented as a simple subtraction of one time-delayed spectrum from the other, introducing additional latency to the audio analysis. Still, the effects processing for the audio output need not be affected by the additional latency, only the analysis. Since many of the things we want to control change relatively slowly, the additional latency might be bearable.
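As a first rough idea of what such a processor could do, the sketch below subtracts a scaled magnitude spectrum of the bleeding source from the contaminated signal before analysis. It assumes the bleed signal has already been time-aligned (delayed) to match its acoustic path, and the FFT size and bleed gain are illustrative guesses; this is the principle only, not a tested implementation.

# Sketch of crosstalk reduction by spectral subtraction, for analysis only.
# target_frame: one window of the contaminated signal (e.g. the vocal mic)
# bleed_frame: the same window of the bleeding source (e.g. drums), already
#              delayed to roughly match the acoustic path between the mics.

import numpy as np

def reduce_bleed(target_frame, bleed_frame, bleed_gain=0.3, fft_size=1024):
    """Return a cleaned magnitude spectrum to feed the feature extraction."""
    win = np.hanning(fft_size)
    T = np.abs(np.fft.rfft(target_frame * win))
    B = np.abs(np.fft.rfft(bleed_frame * win))
    cleaned = np.maximum(T - bleed_gain * B, 0.0)   # clip negative bins to 0
    return cleaned   # use for analysis; the audio output path stays untouched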

Video digest

Brief system overview and evaluation

As preparation for upcoming discussions about tecnical needs in the project, it seems appropriate to briefly describe the current status of the software developed so far.

The Analyzer

The plugins

The two main plugins developed are the Analyzer and the MIDIator. The Analyzer extracts perceptual features from a live audio signal and transmits signals representing these features over a network protocol (OSC) to the MIDIator. The job of the MIDIator is to combine different analyzed features (scaling, shaping, mixing, gating) into a controller signal that we can ultimately use to control some effect parameter. The MIDIator can run on a different track in the same DAW, in another DAW, or on another computer entirely.
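To make the signal path concrete, here is a minimal sketch of how analyzed features might travel from an Analyzer-like sender to a MIDIator-like receiver over OSC. It uses the python-osc package, and the port number and address names are invented for the illustration; the actual plugins are built with Csound/Cabbage, so this only shows the principle of the interface.

# Minimal illustration of the Analyzer -> MIDIator feature transmission
# over OSC. Port and OSC addresses are assumptions for the example.

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9901)   # assumed MIDIator listening port

def send_features(rms_db, transient_density, pitch):
    """Send one frame of extracted features to the receiving plugin."""
    client.send_message("/analyzer/rms_dB", rms_db)
    client.send_message("/analyzer/trans_dens", transient_density)
    client.send_message("/analyzer/pitch", pitch)

send_features(-22.5, 3.1, 440.0)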

Strong points

The feature extraction generally works reasonably well for the signals it has been tested on. Since a limited set of signals is readily available during implementation, some overfitting to these signals can be expected. Still, a large set of features is extracted, and these have been selected and tweaked for use as intentional musical controllers. This can sometimes differ from more purely mathematical and analytical descriptions of a signal. The quality of our feature extraction can best be measured by how well a musician can utilize it to intentionally control the output. No quantitative measurement of that sort has been done so far. The MIDIator contains a selection of methods to shape and filter the signals, and to combine them in different ways. Until recently, the only way to combine signals (features) was to add them together. In the past two weeks, mix methods for absolute difference, gating, and sample/hold have been added (a sketch of the principle follows below).

MIDIator modules
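As an illustration of what these mix methods might look like in principle, here is a small Python sketch of add, absolute difference, gate and sample/hold combinators working on two feature signals normalized to 0..1. The function names, thresholds and scaling are assumptions for the example, not the MIDIator’s actual code.

# Illustrative versions of the mix methods mentioned above, operating on
# two feature signals a and b, both assumed normalized to the range 0..1.

def mix_add(a, b, scale_a=0.5, scale_b=0.5):
    return min(scale_a * a + scale_b * b, 1.0)     # weighted sum, clipped

def mix_abs_diff(a, b):
    return abs(a - b)                              # high when the features diverge

def mix_gate(a, b, threshold=0.5):
    return a if b > threshold else 0.0             # b opens/closes the gate for a

class SampleHold:
    def __init__(self):
        self.held = 0.0
    def __call__(self, a, b, threshold=0.5):
        if b > threshold:                          # b triggers a new sample of a
            self.held = a
        return self.held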

Weak points

The signal transmission from the Analyzer to the MIDIator, and again from the MIDIator to the control signal destination, each incurs at least one sample block of latency. The size of a sample block can vary from system to system, but regardless of the size used, our system will have three times this latency before an effect parameter value changes in response to a change in the audio input. For many types of parameter changes this is not critical, but it is still a notable limitation of the system.
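As a rough illustration (assuming, hypothetically, a block size of 64 samples at 44.1 kHz), one block lasts about 1.5 ms, so three blocks amount to roughly 4.4 ms before a parameter responds; with 1024-sample blocks the same chain would add roughly 70 ms.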

The signal transmission latency points to another general problem: interfacing between technologies. Each time we transfer signals from one paradigm to another, we have the potential for degraded performance, reduced stability and/or added latency. In our system, the interface from the DAW to our plugins incurs one sample block of latency, and the interface between Csound and Python can sometimes incur performance penalties if large chunks of data need to be transmitted from one to the other. Likewise, the communication between the Analyzer and the MIDIator is such an interface.

Some (many) of the feature extraction methods produce somewhat noisy signals. By noise, we mean here that the analyzer output can intermittently deviate from the value we perceptually assume to be “correct”. We can also look at this deviation statistically, by feeding the analyzer relatively (perceptually) consistent signals and looking at how stable the output of each feature extraction method is. Many of the features show activity generally in the right register, and a statistical average of the output corresponds with general perceptual features. While the average values are good, we will oftentimes see spurious values with relatively high deviation from the general trend. From this, we can assume that the feature extraction model generally works, but intermittently fails. Sometimes, filtering is used as an inherent part of the analysis method, and in all cases, the MIDIator has a moving exponential average filter with separate rise and fall times. Filtering can be used to cover up the problem, but better analysis methods would give us more precise and faster response from the system.
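For reference, this kind of asymmetric smoothing (an exponential moving average with separate rise and fall times) can be sketched as below; the control rate and time constants are assumptions for illustration, not the MIDIator’s actual parameters.

# Sketch of an exponential moving average with separate rise and fall times.

import math

class RiseFallFilter:
    def __init__(self, rise_time, fall_time, frame_rate=100.0):
        # convert time constants (seconds) into per-frame smoothing factors
        self.up = math.exp(-1.0 / (rise_time * frame_rate))
        self.down = math.exp(-1.0 / (fall_time * frame_rate))
        self.y = 0.0

    def step(self, x):
        coef = self.up if x > self.y else self.down   # pick rise or fall factor
        self.y = coef * self.y + (1.0 - coef) * x
        return self.y

smooth = RiseFallFilter(rise_time=0.05, fall_time=0.5)   # fast up, slow down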

Audio separation between instruments can sometimes be poor. In the studio, we can isolate each musician, but if we want them to be able to play together naturally in the same room, significant bleed from one instrument to the other will occur. For live performance this situation is obviously even worse. The bleed gives rise to two kinds of problems: signal analysis is disturbed, and signal processing is cluttered. For the analysis, even perfect analysis methods would not help if the signal to be analyzed is a messy combination of opposing perceptual dimensions. For the effect processing, controlling an effect parameter for one instrument leads to a change in the processing of the other instrument, simply because the other instrument’s sound bleeds into the first instrument’s microphones.

Useful parameters (features extracted)

In many of the sessions up until now, the most used features have been amplitude (rms) and transient density. One reason for this is probably that they are conceptually easy to understand; another is that their output is relatively stable and predictable in relation to the perceptual quality of the sound analyzed. Here are some suggestions of other parameters that can be expected to work effectively in the current implementation:

  • envelope crest ( env_crest ): the peakyness of the amplitude envelope; for sustained sounds this will be low, for percussive onsets with silence between events it will be high
  • envelope dynamic range ( env_dyn ): goes low for signals operating at a stable dynamic level, high for signals with a high degree of dynamic variation.
  • pitch: well known
  • spectral crest ( s_crest ): goes low for tonal sounds, medium for pressed tones, high for noisy sounds.
  • spectral flux ( s_flux ): goes high for noisy sounds, low for tonal sounds
  • mfccdiff: measure of tension or pressedness, described here

There is also another group of extracted features that are potentially useful but still have some stability issues:

  • rhythmic consonance ( rhythm_cons ) and rhythmic irregularity ( rhythm_irreg ): described here
  • rhythm autocorr crest ( ra_crest ) and rhythm autocorr flux ( ra_flux ): described here

The rest of the extracted features can be considered more experimental; in some cases they might yield effective controllers, especially when combined with other features in reasonable proportions.

Skype on philosophical implications

Skype with Solveig Bøe, Simon Emmerson and Øyvind Brandtsegg.

Our intention for the conversations was to start sketching out some of the philosophical implications of our project, partly as a means to understand what we are doing and what it actually means, and partly as a means of informing our further work, choosing appropriate directions and approaches. Before we got to these overarching issues, we also discussed some implications of the very here and now of the project. We come back to the overarching issues in the latter part of this post.

Rhythm

Initially, we commented on rhythm analysis that is not based on the assumption of an underlying constant meter or pulse. The work of Ken Fields on internet performance and the nature of time was mentioned, as well as John Young’s writing on form and phrase length (in Trevor Wishart’s music) and Ambrose Seddon’s chapter on time structures in Emmerson/Landy’s recent book “Expanding the Horizon of Electroacoustic Music Analysis”. In live performance there is also George Lewis’ rhythm analysis (or event analysis) in his interactive pieces (notably Voyager). See https://www.amherst.edu/media/view/58538/original/Lewis%2B-%2BToo%2BMa

Score as guide

In the context of Oeyvind being at UCSD, where one of their very strong fields of competence is contemporary music performance, he thought loosely about a possible parallel between a composed score and the cross-adaptive performance setting. One could view the situation designed for/around the performers in a cross-adaptive setting as a composition, in terms of it setting a direction for the interplay and thus also posing certain limits (or obstacles/challenges) to what the performer can and cannot do. On the possible analogy between the traditional score and our crossadaptive performance scenario, one could raise the objection that a classical performer does not so much just follow the dos and don’ts described in a score, but rather uses the score as a means to engage with the composer’s intention. This may or may not apply to the musics created in our project, as we strive not to limit ourselves to specific musical styles, while still leaning solidly towards a quite freely improvised expression in a somewhat electroacoustic timbral environment. In some of our earlier studio sessions, we noted that there are two competing modes of attention:

  • To control an effect parameter with one’s audio signal, or
  • to respond musically to the situation

… meaning that what the performer chooses to play would traditionally be based on an intended musical statement contributing to the current musical situation, whereas now she might rather play something that (via its analyzed features) will modulate the sound of another instrument in the ensemble. This new something, being a controller signal rather than a self-contained musical statement, might actually express something of its own that contradicts the intended (and assumedly achieved) effect it has on changing the sound of another instrument. On top of this, consider that some performers (Simon was quoting the as yet unpublished work of postgraduate and professional bass clarinet performer Marij van Gorkom) experience the use of electronics as perhaps disturbing the model of engagement with the composer’s intention through the written score, something that one needs to adjust to and accommodate on top of the other layers of mediation. Then we might consider our crossadaptive experiments as containing the span of pain to love in one system.

…and then to philosophy

A possible perspective suggested by Oeyvind was a line connecting Kant-Heidegger-Stiegler, possibly starting offside but nevertheless somewhere to start. The connection to Kant is his notion of the noumenal world (where the things in themselves are) and the phenomenal world (which is where we can experience and to some extent manipulate the phenomena). Further, to Heidegger with his thoughts on the essence of technology: rather than viewing technology as a means to an end (instrumental), or as a human activity (anthropological), he refers to a bringing-forth, a bringing out of concealment into unconcealment. Heidegger then sees technology as a threat to this bringing-forth (of truth), bypassing or denying human active participation (in the real events, i.e. the revealing of truth). Again, he suggests that art can be a way of navigating this paradox and actively sidestepping the potential threat of technology. Then on to Stiegler with his view of technics as organized inorganic matter, somewhat with an immanent drive or dynamic of its own, possibly constituting an impediment to socialization, individuation and intersubjectivization. In hindsight (after today’s discussion), this is perhaps a rather gray, or bleak, approach to the issues of technology in the arts. In the context of our project, it was intended to portray the difficulty of controlling something with something else, the reaching into another dimension while acting from the present one.

Solveig countered with a more phenomenological approach, bringing in Merleau-Ponty’s perspectives of technology as an extension of the body, of communication and communion, of learning and experiencing together, acting on a level below language. On this level the perceptive and the corporeal, including extensions that are incorporated into the body schema, are intertwined and form a unity. This seems like a path for further investigation. One could easily say that we are learning to play a new instrument, or learning to navigate a new musical environment, so our current endeavors must be understood as baby steps in this process (as also reflected on earlier), and we are this intertwined unity taking form. In this context, Merleau-Ponty’s ideas about the learning of language as something that takes place by learning a play of roles, where understanding how to change between roles is an essential part of the learning process, also come into view as appropriate.

Simon also reported on another research student project: cellist Audrey Riley was a long-time member of the Merce Cunningham ensemble and is engaged in examining a (post-Cage) view of performance, including ideas of ‘clearing the mind’ of thought and language, as they might ‘get in the way’. This relates to the idea of ‘communion’, where there is a sense of wholeness and completeness. It also leads us to Wittgenstein’s idea that both in art and philosophy there is something that is unutterable, that may only be demonstrated.

How does this affect what we do?

We concluded our meeting with a short discussion on how the philosophical aspects can be set into play to aid our further work in the project. Are these just high-and-wide ramblings, or can we actually make use of them? The setting up of some dimensions of tension, or oppositions (which may not be oppositions but sometimes rather different perspectives on the same thing), may be of help in practically approaching the problems we face. Take the assumed opposition of communion versus disturbance: do the crossadaptive interventions into the communication between performers act as a disturbance, or can they be a way of enabling communion? Can we increase the feeling of connectivity? We can also use these lines of thought to describe what happened (in an improvisation), or to try to convey the implications of the project as a whole. Most important, however, is probably the growth of a vocabulary for reflection, through the probing of our own work in the light of the philosophical implications.

Documentation as debugging

This may come as no surprise to some readers, but I thought it appropriate to note anyway. During the blog writing about rhythmic analysis – part 1, I noticed that I tightened up the definition of terms, and also the actual implementation of the algorithm, significantly. I would start writing something, see what I had just written and think “this just does not make sense” or “this implementation must be off, or just plainly wrong”. Then, to be able to write sensibly about the ideas, I went back and tidied up my own mess, in the process making a much more reliable method for analysing the features I wanted to extract. What was surprising was not that this happened, but the degree to which it happened. In the context of artistic research and the reflection embedded in that process, similar events may occur.

Mixing with Gary

During our week in London we had some sessions with Gary Bromham, first at the Academy of Contemporary Music in Guildford on June 7th, then at QMUL later in the week. We wanted to experiment with cross-adaptive techniques in a traditional mixing session, using our tools/plugins within a Logic session to work similarly to traditional sidechaining, but with the expanded palette of analysis and modulator mappings enabled by the tools developed in the project. Initially we tried to set this up with Logic as the DAW. It kind of works, but seems utterly unreliable. Logic would not respond to learned controller mappings after we closed the session and reopened it. It did receive the MIDI controller signal (and could re-learn), but in all cases refused to respond to the received automation control. In the end we abandoned Logic altogether and went for our safe, always-does-the-job Reaper.

As the test session for our experiments we used Sheryl Crow’s “Soak up”, using stems for the backing tracks and the vocals.


Example 1: Vocal pitch to reverb send and reverb decay time.


Example 2: Vocal pitch as above. Adding vocal flux to hi-cut frequency for the rest of the band. Rhythmic analysis (transient density) of the backing track controls a peaking EQ sweep on the vocals, creating a sweeping effect somewhat like a phaser. This is all somewhat odd altogether, but useful as a controlled experiment in polyphonic crossadaptive modulation.

* The first thing Gary asked for was to process one track according to the energy in a specific frequency band of another. For example: “if I remove 150Hz on the bass drum, I want it to be added to the bass guitar”. Now, it is not so easy to analyze what is missing, but easier to analyze what is there. So we thought of another thing to try: sibilants (S’es) on the vocals can be problematic when sent to reverb or delay effects. Since we don’t have a multiband envelope follower (yet), we tried to analyze for spectral flux or crest, then use that control signal to duck the reverb send for the vocals (a rough sketch of this ducking idea follows after these notes).

* We had some latency problems relating to pitch tracking of the vocals, with the modulator signal arriving a bit late to precisely control the reverb size for the vocals. The tracking is OK, but the effect responds *after* the high pitch. This was solved by delaying the vocal *after* it was sent to the analyzer, and then also delaying the direct vocal signal and the rest of the mix accordingly.

* Trond’s idea for later: use vocal amplitude to control bitcrush mix on the drums (and other programmed tracks)

* Trond’s idea for later: use vocal transient density to control a delay setting (delay time, … or delay mix)

* Bouncing the mix: bouncing does not work, as we need the external modulation processing (Analyzer and MIDIator) to also be active. Logic seems to disable “external effects” (like Reaper here, running via Jack like an outboard effect in a traditional setting) when bouncing.

* Something good: pitch-controlled reverb send works quite well musically, and is something one would not be able to do without the crossadaptive modulation techniques. Well, it is actually just adaptive here (vocals controlling vocals, not vocals controlling something else).

* Notable: do not try to fix (old) problems, but try to be creative and find new applications/routings/mappings. For example, the initial ideas from Gary were related to common problems in a mixing situation, problems that one can already fix (with de-essers or similar).

* Trond: It is unfamiliar in a post-production setting to hear the room size change, as one is used to static effects in the mix.

* It would be convenient if we could modulate the filtering of a control signal depending on analyzed features too. For example changing the rise time for pitch depending on amplitude.

* It would also be convenient to have the filter times available as synced values (e.g. 16th notes) relative to the master tempo.
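Referring back to the sibilance-ducking note above, a minimal sketch of the idea (pull the reverb send down quickly when spectral flux exceeds a threshold, and let it recover slowly) could look like this; the threshold, duck level and attack/release factors are assumptions for illustration.

# Sketch of ducking a reverb send from a spectral flux control signal.

def duck_send(flux, current_send, threshold=0.6,
              duck_level=0.2, attack=0.5, release=0.02):
    """Return the new reverb-send gain (0..1) for one control frame."""
    target = duck_level if flux > threshold else 1.0
    coef = attack if target < current_send else release   # fast down, slow up
    return current_send + coef * (target - current_send)

send = 1.0
for flux in [0.1, 0.8, 0.9, 0.2, 0.1]:
    send = duck_send(flux, send)
    print(round(send, 2))   # drops quickly on sibilance, recovers slowly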

FIX:

– Add multiband rms analysis.

– Check the roundtrip latency of the analyzer-modulator chain, i.e. the time from when an audio signal is sent until the modulator signal comes back.

– Add modulation targets (e.g. rise time). This most probably just works, but we need to open the MIDI feed back into Reaper.

– Add sync to the filter times. Cabbage reads bpm from the host, so this should also be relatively straightforward.