Seminar and meetings at Queen Mary University of London

On June 9th and 10th we visited QMUL and met Joshua Reiss and his eminent colleagues there. We were very well taken care of and had a pleasant and interesting stay. On June 9th we held a seminar presenting the project and discussing related issues with a group of researchers and students. The seminar was recorded on video, to be uploaded to QMUL's YouTube channel. The day after, we had a meeting with Joshua, going into more detail. We also got to meet several PhD students and got insight into their research.

Seminar discussion

Here are some of the issues that were touched upon in the seminar discussion:

* Analyze gestures inherent in the signal (e.g. a crescendo) and use these as triggers to turn some process on or off, flip a preset, etc. (a small sketch of crescendo detection follows after this list). We could also analyze for very specific patterns, like a melodic fragment, but it is probably better to look for gestures that can be performed in several different ways, so that the musician retains freedom of expression while still having a very clear interface for controlling the processes.

* Analyze features related to specific instruments. It is easier to find analysis methods that extract very specific features than to ask "how can we analyze this to extract something interesting?". This is perhaps a lesson for us to be a bit more specific about what we want to extract. It is somewhat opposite to our current exploratory effort of simply learning how the currently implemented analysis signals work and what we can get from them.

* Look for deviations from a quantized value, for example pitch deviations within a semitone, or rhythmic deviations from a time grid (see the sketch after this list).

* Semantic spaces. Extract semantic features from the signal; these could be timbral descriptors, mood, directions, etc. Which semantics? Where should the terminology come from? We should try to develop examples of useful semantics, useful things to extract.

* Semantic descriptors are not necessarily single points in a multidimensional space; each is more like a blob, an area. Interpolation between these blobs may not be linear in all cases. We don't have to use all implied dimensions, so we can select features/semantics/descriptors that give us the possibility of linear interpolation, at least in those situations where we need to interpolate.

* Look at old speech codecs, which are open source. Excitation/resonator model, LPC. These work in the time domain, so they will be really fast/low-latency.

* Cepstral techniques can also be used to separate resonator and exciter, the smoothed cepstrum representing the resonator. Take the smoothed cepstrum and subtract it from the full cepstrum to get the excitation (see the sketch after this list).

* The difficulty of control: the challenge to the performer, limiting the musical performance, inhibiting the natural ways of interaction. This is a recurring issue, and something we want to handle carefully. It is also really what the project is about: creating *new* ways of interaction.

* The performer will adapt to imperfections of the analysis. Normally, the signals analyzed in MIR are static and prerecorded; they are not "aware" that they are being analyzed. In our case, the performer, being aware of how the analysis method works and what it responds to, can adapt the playing to trigger the analysis method in a highly controllable manner. This way the cross-adaptivity is not only technical, related to the control of parameters, but also adaptive in relation to how the performer shapes her phrases and, in turn, what she selects to play.

* Measuring collective features, features of the mix, with each instrument contributing equally to the modulator signal. One signal can push others down or single them out. This relates to game theory: what is the most favorable behavior over time, suppressing others or negotiating and adapting?
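
To make a few of the points above more concrete, here are some minimal sketches in Python with numpy (not the language of our toolkit, just convenient for illustration). First, the gesture-as-trigger idea: one possible crescendo detector fits a straight line to a short history of loudness values and fires when the slope exceeds a threshold. The function name, frame rate and threshold are made-up assumptions.

```python
import numpy as np

def crescendo_trigger(rms_history_db, min_slope_db_per_s=6.0, frame_rate=86.0):
    """Return True if the recent loudness trend looks like a crescendo.

    rms_history_db: 1-D array of recent RMS values in dB, oldest first.
    min_slope_db_per_s: slope threshold (hypothetical value).
    frame_rate: analysis frames per second.
    """
    t = np.arange(len(rms_history_db)) / frame_rate
    # Least-squares fit of a straight line to the loudness trajectory
    slope, _ = np.polyfit(t, rms_history_db, 1)
    return slope > min_slope_db_per_s

# Example: a rising envelope over ~1 second triggers, a flat one does not
rising = np.linspace(-40, -20, 86)
flat = np.full(86, -30.0)
print(crescendo_trigger(rising), crescendo_trigger(flat))  # True False
```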
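
Second, deviation from quantized values: how far a pitch is from the nearest equal-tempered semitone (in cents), and how far an onset is from the nearest point on a fixed time grid. The 440 Hz reference and the fixed grid are assumptions.

```python
import numpy as np

def pitch_deviation_cents(freq_hz, ref_hz=440.0):
    """Deviation from the nearest equal-tempered semitone, in cents (-50..+50)."""
    midi = 69 + 12 * np.log2(freq_hz / ref_hz)
    return (midi - np.round(midi)) * 100.0

def rhythm_deviation(onset_time_s, grid_s):
    """Signed deviation (seconds) from the nearest point on a fixed time grid."""
    return onset_time_s - np.round(onset_time_s / grid_s) * grid_s

print(pitch_deviation_cents(446.0))        # a bit sharp of A4, roughly +23 cents
print(rhythm_deviation(1.07, grid_s=0.25)) # ~+0.07 s late relative to 16ths at 60 BPM
```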
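
Third, a rough sketch of the cepstral resonator/exciter separation: lifter the real cepstrum to keep the low-quefrency part as the resonator (spectral envelope), and take what remains of the log spectrum as the excitation. The frame windowing and liftering cutoff are arbitrary choices here.

```python
import numpy as np

def cepstral_split(frame, lifter_cutoff=30):
    """Split one audio frame into resonator and excitation log-spectra.

    lifter_cutoff: number of low-quefrency bins kept as the 'resonator'.
    Returns (resonator_log_mag, excitation_log_mag).
    """
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-9)
    cepstrum = np.fft.irfft(log_mag)
    # Low-quefrency lifter: keep only the slowly varying spectral envelope
    lifter = np.zeros_like(cepstrum)
    lifter[:lifter_cutoff] = 1.0
    lifter[-lifter_cutoff + 1:] = 1.0       # keep the symmetry of the real cepstrum
    resonator = np.fft.rfft(cepstrum * lifter).real
    excitation = log_mag - resonator        # what is left is the excitation
    return resonator, excitation

frame = np.random.randn(1024)               # stand-in for one frame of audio
resonator, excitation = cepstral_split(frame)
```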

Meeting with Joshua

* Josh mentioned a few researchers whose work might interest us. Brecht De Man: intelligent audio switcher. Emmanouil Chourdakis: feature-based reverb. Dave Ronan: groups, stems, automatic mixing. Vincent Verfaille: effect classification and A-DAFx. Bryan Pardo: interfaces for music performance/production, visualization, semantics, machine learning. Pedro Pestana: sound engineer, best practices (PhD), automatic mixing. Ryan Stables: SAFE plugins.

* Issues relating to publishing our work and getting an audience for it. As we could in some respects claim to be creating a new field, building a community around it might be essential for further use of our research. Increase visibility; promote via QMUL press and NTNU info as well. Related fields include Human Computer Interaction, New Interfaces for Musical Expression (NIME) and the Audio Engineering Society.

* QMUL has considerable experience with evaluation studies, user experience tests, listening tests etc. Some of this may be a beneficial perspective on our otherwise experiential approach.

* Collective features (meaning individual signals in relation to each other and to the ensemble mix): masking (spectral overlap), onset times in relation to other instruments (lagging), note durations, percussive/sustained etc.

* We currently use spectral crest, but crest is also useful in the time domain for rhythmic analysis (rhythmic density, percussiveness, dynamic range). It will work better with a loudness matching curve (dB).

* A time domain filterbank is faster than an FFT. Logarithmically spaced bands.

* FFT of the time domain amplitude envelope (see the rhythm-spectrum sketch after this list).

* Separate silence from noise. Automatic gain control. Automatic calibration of the noise floor (use a peak-to-average measure to estimate what is background noise and what is actual signal); see the sketch after this list.

* Look for patterns: playing the same note, also pitch classes (octaves), also collectively (between instruments).

* Use a log frequency spectrum, with amplitudes in dB, then compute centroid/skew/flux etc. (see the sketch after this list).

* How to collaborate with others at QMUL, adapting their plugins. Should we port their techniques to Csound, or re-implement ours in C++? For the prototype and experimentation stage, maybe modify their plugins to output just control signals? We should describe our framework clearly so that their code can be plugged in.
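
Again, a few minimal sketches of the points above, in Python with numpy/scipy for illustration only. The "FFT of the time domain amplitude envelope" idea: follow the envelope with a rectifier and a one-pole lowpass, then take the FFT of the downsampled envelope so that peaks show up at rhythmic rates. The filter and hop settings are made-up assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def rhythm_spectrum(audio, sr, hop=512, smooth=0.99):
    """Magnitude spectrum of the amplitude envelope (peaks ~= rhythmic rates in Hz)."""
    # Envelope follower: full-wave rectify + one-pole lowpass
    env = lfilter([1.0 - smooth], [1.0, -smooth], np.abs(audio))
    env = env[::hop]                       # downsample the envelope
    env = env - np.mean(env)               # remove DC so the overall level does not dominate
    spec = np.abs(np.fft.rfft(env * np.hanning(len(env))))
    freqs = np.fft.rfftfreq(len(env), d=hop / sr)
    return freqs, spec

# Example: noise amplitude-modulated at 4 Hz shows a peak near 4 Hz
sr = 44100
t = np.arange(sr * 4) / sr
audio = np.random.randn(len(t)) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
freqs, spec = rhythm_spectrum(audio, sr)
print(freqs[np.argmax(spec)])  # close to 4.0
```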
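
The noise floor calibration using a peak-to-average measure could look something like this: frames with a low crest factor (peak close to average) are treated as candidates for background noise, and the floor is taken from the quietest of those frames. The thresholds are invented for illustration.

```python
import numpy as np

def classify_frames(audio, frame_len=1024, crest_thresh=4.0, margin_db=9.0):
    """Label frames as signal (True) or background noise (False), and return the floor."""
    frames = audio[:len(audio) // frame_len * frame_len].reshape(-1, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    peak = np.max(np.abs(frames), axis=1) + 1e-12
    crest = peak / rms                      # peak-to-average measure per frame
    level_db = 20 * np.log10(rms)
    # Candidate noise frames: peak close to average (steady, unstructured energy)
    candidates = crest < crest_thresh
    floor_db = np.percentile(level_db[candidates] if np.any(candidates) else level_db, 10)
    is_signal = level_db > floor_db + margin_db
    return is_signal, floor_db

# Usage (with a mono numpy array 'audio'):
# is_signal, floor_db = classify_frames(audio)
```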
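
And the log-frequency/dB spectral moments: compute centroid, spread and skewness with log2 frequency as the axis and dB-scaled magnitudes as weights. The dB floor used to keep the weights non-negative is an assumption.

```python
import numpy as np

def log_spectral_moments(frame, sr, floor_db=-60.0):
    """Centroid, spread and skewness of the spectrum, with log2 frequency and dB weights."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    keep = freqs > 20.0                               # ignore DC / subsonic bins
    logf = np.log2(freqs[keep])
    amps_db = 20 * np.log10(mag[keep] + 1e-9)
    weights = np.clip(amps_db - floor_db, 0.0, None)  # shift so weights are non-negative
    wsum = np.sum(weights) + 1e-12
    centroid = np.sum(logf * weights) / wsum          # in octaves (log2 Hz)
    spread = np.sqrt(np.sum(weights * (logf - centroid) ** 2) / wsum)
    skew = np.sum(weights * (logf - centroid) ** 3) / (wsum * spread ** 3 + 1e-12)
    return centroid, spread, skew
```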

Seminar at De Montfort

Simon and Leigh in Leigh’s office at De Montfort

On Wednesday June 8th we visited Simon Emmerson at De Montfort and also met Director Leigh Landy. We were very well taken care of and had a pleasant and interesting stay. One of the main objectives was to hold a seminar with a presentation of the project and a discussion among the De Montfort researchers. We found that their musical preferences seem to overlap considerably with our own, in the focus on free improvisation and electroacoustic art music. As this is the most obvious and easiest context in which to implement experimental techniques (like the crossadaptive ones), we had taken care also to present examples of use within other genres. This could be interpreted as if we were more interested in traditional applications/genres than in the freely improvised ones. Knowing the environment at Leicester better now, we could probably have put more emphasis on the free electroacoustic art music applications. But indeed this led to interesting discussions about applicability, for example:

* In metric/rhythmic genres, one could more easily analyze and extract musical features related to bar boundaries and rhythmic groupings.

* Interaction itself could also create meter, as the response time (both human and technical) has a rhythm and periodicity that can evolve musically due to the continuous feedback processes built into the way we interact with such a system, and with each other in the context of such a system.

* Static and deterministic versus random mappings. Several people were interested in more complex and more dynamic controller mappings, expressing interest and curiosity about playing in a situation where the mapping could quickly and randomly change. References were made to Maja S.K. Ratkje, and it was suggested that her kind of approach would probably make her interested in situations that are more intensely dynamic. Her ability to respond to the challenges of a quickly changing musical environment (e.g. changes in the mapping) also correlates with an interest in exploring this kind of complex situation. Knowing Maja from our collaborations, I think they may be right; I take note to discuss this with her and to devise some challenging mapping situations for her to try out.

* It was discussed whether the crossadaptive methods could be applied to the "dirty electronics" ensemble/course situation, and there was an expressed interest in exploring this. Perhaps it will be crossadaptivity in other ways than what we use directly in our project, as the analysis and feature extraction methods do not necessarily transfer easily to the DIY (DIT – do it together, DIWO – do it with others) domain. The "do it with others" approach resonates well with our general way of working, by the way.

* The complexity is high even with two performers. How many performers do we envision this being used with? How large an ensemble? As we have noticed ourselves, following the actions of two performers already creates a multi-voice polyphonic musical flow (two sources, each source's influence on the other source, the resulting timbral change, and the other player's response to these changes). How many layers of polyphony can we effectively hear and distinguish when experiencing the music, as performers or as audience? References were made to the laminal improvisation techniques of AMM.

* Questions of overall form. How will interactions under a crossadaptive system change the usual formal approach of a large overarching rise-and-decay form commonly found in "free" improvisation? At first I took the comment to suggest that we could also apply more traditional MIR techniques of analyzing longer segments of sound to extract a "direction of energy" and/or other features evolving over longer time spans. This could indeed be interesting, but it also poses the problem of how the parametric response to long-term changes should act (i.e. we could accidentally turn a parameter up way too high, and it would then stay high for a long time before the analysis window would enable us to bring it back down). In some ways this would resemble using extremely long attack and decay times for the lowpass filter we already have in place in the MIDIator, creating very slow responses that need continued excitation over a prolonged period before the modulator value responds. After the session I discussed this more with Simon, and he indicated that the large-form aspects were probably just as much meant with regard to the perception of the musical form as to the filtering and windowing in the analysis process. There are interesting issues of drama and rhetoric posed by bringing these issues in, whether one tackles them at the perception level or at the analysis and mapping stage.

* Comments were made that performing successfully with this system would require immense effort in terms of practicing and getting to know the responses and reactions of the system in such an intimate manner that one could use it effectively for musical expression. We agree, of course.

Project start meeting in Trondheim

Kickoff

On Monday June 6th we had a project start meeting with the NTNU based contributors: Andreas Bergsland, myself, Solveig Bøe, Trond Engum, Sigurd Saue, Carl Haakon Waadeland and Tone Åse. This gave us the opportunity to present the current state of affairs and our regular working methods to Solveig. Coming from philosophy, she has not taken part in our earlier work on live processing. As the last few weeks have been relatively rich in development, this also gave us a chance to bring the whole team up to speed. Trond and I also gave a live demo of a simple crossadaptive setup where vocals control delay time and feedback on the guitar, while the guitar controls reverb size and high-frequency damping for the vocals. We had discussions and questions interspersed within each section of the presentation. Here is a brief recounting of the issues we touched upon.

Roles for the musician

The role of the musician in crossadaptive interplay has some extra dimensions compared to a regular acoustic performance situation. A musician will normally formulate her own musical expression and relate this to what the other musician is playing. On top of this comes the new mode of response created by live processing, where the instrument's sound constantly changes due to the performative actions of a live processing musician. In the cross-adaptive situation, these changes are directly controlled by the other musician's acoustic signal, so the musical response is two-fold: responding to the expression and responding to the change in one's own sound. As these combine, we may see converging or diverging flows of musical energy between the different incentives and responses at play. Additionally, her own actions will influence changes in the other musician's sound, so the expressive act is also two-fold: creating the (regular) musical statement and also considering how the changes inflicted on the other's sound will affect both how the other one sounds and how that affects their combined effort. Indeed, yes, this is complex. Perhaps a bit more complex than we had anticipated. The question was raised whether we do this only to make things difficult for ourselves. Quite justly. But we were looking for ways to intervene in the regular musical interaction between performers, to create yet unheard ways of playing together. It might appear complex just now because we have not yet figured out the rules and physics of this new situation, and it will hopefully become more intuitive over time. Solveig put it as if we place the regular modes of perception in parentheses. For good or for bad, I think she may be correct.

Simple testbeds

It seems wise to initially set up simplified interaction scenarios, like the vocal/reverb guitar/delay example we tried in this meeting. It puts the emphasis on exploring the combinatorial parameter modulation space. Even in a simple situation of extracting two features from each sound source, controlling two parameters on each other's sound, the challenges to the musical interaction are prominent. Controlling two features of one's own sound to modulate the other's processing is reasonably manageable while also concentrating on the musical expression.

Interaction as composition

An interaction scenario can be thought of as a composition. In this context we may define a composition as something that guides the performers to play in a certain way (think of the text pieces from the 60s, for example, setting the general mood or each musician's role while allowing a fair amount of freedom for the performer, as no specific events are notated). As the performers formulate their musical expression to act as controllers just as much as to express an independent musical statement, the interaction mode has some of the same function as a composition has in written music, namely to determine or guide what the performers will play. In this setting, the specific performative action is freely improvised, but the interaction mode emphasizes certain kinds of action to such an extent that the improvisation is in reality not free at all, but guided by the possibilities (the affordance [https://en.wikipedia.org/wiki/Affordance]) of the system. The intervention into the interaction also sheds light on regular musical interaction. We become acutely aware of what we normally do to influence how other musicians play together with us. Then this is changed, and we can reflect on both the old (regular, unmodulated) kind of interaction and the new crossadaptive mode.

Feature extraction, performatively relevant features

Extracting musically salient features is a Big Question. What is musically relevant? Carl Haakon suggested that some feature related to energy could be interesting. Energy, for the performer, can be put into the musical statement in several ways: rhythmic activity, loudness, timbre, and other means of expressing an energetic performance. As such it could be a feature taking input from several mathematical descriptors. It could also be a feature allowing a certain amount of expressive freedom for the performer, as energy can be added by several widely different performative gestures, giving some independence from having to perform very specific actions in order to trigger the control of the destination parameter. Mapping the energy feature to a destination parameter that results in a richer and more energetic sound could lead to musically convergent behavior; conversely, controlling a parameter that makes the resulting sound more sparse could create musical and interactive tension. In general, it might be a good idea to use such higher level analyses. This simplifies the interaction for the musician and also creates several alternative routes to inflict a desired change in the sound. The option to create the same effect by several independent routes/means also provides the opportunity for doing so with different kinds of side effects (as in regular acoustic playing), e.g. creating energy in this manner or that manner gives very different musical results but in general drives the music in a certain direction.
Machine learning (e.g. via neural networks) could be one way of extracting such higher level features, distinguishing different performance situations or different distinct expressions of a performer. We could expect some issues of recalibration due to external conditions: slight variations in the signal due to a different room, miking situation etc. Will we need to re-learn the features for each performance, or can we find robust classification methods that are not so sensitive to variations between instruments and performance situations?
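
A minimal sketch of such a composite "energy" feature, along the lines Carl Haakon suggested: combine loudness, onset density and spectral brightness into a single normalized value, so that the performer can reach the same control value through different performative means. This is not part of the current toolkit; the weights and normalization ranges are invented and would need tuning per instrument.

```python
import numpy as np

def energy_feature(rms_db, onsets_per_s, centroid_hz, weights=(0.4, 0.3, 0.3)):
    """Combine three sub-features into one 0..1 'energy' value.

    rms_db:       loudness of the current frame, in dB (assumed -60..0 range)
    onsets_per_s: recent onset density (assumed 0..8 range)
    centroid_hz:  spectral centroid (assumed 200..4000 Hz range)
    """
    loud = np.clip((rms_db + 60.0) / 60.0, 0.0, 1.0)
    dens = np.clip(onsets_per_s / 8.0, 0.0, 1.0)
    brgt = np.clip(np.log2(centroid_hz / 200.0) / np.log2(4000.0 / 200.0), 0.0, 1.0)
    w = np.asarray(weights) / np.sum(weights)
    return float(w[0] * loud + w[1] * dens + w[2] * brgt)

# Loud sparse playing and soft busy playing can give comparable 'energy'
print(energy_feature(-10.0, 1.0, 800.0))   # ~0.51
print(energy_feature(-30.0, 6.0, 1500.0))  # ~0.63
```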

Meta mapping, interpolations between mapping situations

Dynamic mappings would allow the musicians to change the mapping modes during different sections of the performed piece. If the interaction mode becomes limited or "worn out" after a while of playing, the modulation mappings could be gradually changed. This can be controlled by an external musician or sound engineer, or it can be mapped to yet other layers of modulation. So, a separate feature of the analyzed sound is mapped to a modulator changing the mappings (the preset, or the general modulation "situation" or "system state") of all other modulators, creating a layered meta-mapping-modulator configuration. At this point this is just an option, still too complex for our initial investigation. It brings to mind the modulator mapping used in the Hadron Particle Synthesizer, where a simple X-Y pad is used to interpolate between different states of the instrument, each state containing modulation routing and mapping in addition to parameter values. The current Hadron implementation allows control over 209 parameters and 54 modulators via a simple interface, enabling a simplified multidimensional control. Maybe the cross-adaptive situation can be thought of as somehow similar. The instrumental interface of Hadron behaves in highly predictable ways, but it is hardly possible to decode intellectually; one has to interact by intuitive control and listening.
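
The Hadron-style X-Y interpolation between mapping states could, in a much simplified form, be sketched as bilinear interpolation between four corner states, where each state holds both parameter values and a routing matrix. The data layout below is hypothetical and far simpler than the actual Hadron implementation.

```python
import numpy as np

def interpolate_states(x, y, corners):
    """Bilinearly interpolate between four corner states on an X-Y pad.

    x, y:    position on the pad, both in 0..1
    corners: dict with keys 'bl', 'br', 'tl', 'tr'; each value is a dict of
             numpy arrays, e.g. {'params': ..., 'routing': ...}
    """
    w = {'bl': (1 - x) * (1 - y), 'br': x * (1 - y),
         'tl': (1 - x) * y,       'tr': x * y}
    keys = corners['bl'].keys()
    return {k: sum(w[c] * corners[c][k] for c in w) for k in keys}

# Hypothetical mini-states: 3 parameters and a 2x3 routing matrix each
mk = lambda p, r: {'params': np.array(p, dtype=float), 'routing': np.array(r, dtype=float)}
corners = {'bl': mk([0.0, 0.2, 0.5], [[1, 0, 0], [0, 1, 0]]),
           'br': mk([1.0, 0.2, 0.1], [[0, 1, 0], [0, 0, 1]]),
           'tl': mk([0.3, 0.9, 0.5], [[1, 1, 0], [1, 0, 0]]),
           'tr': mk([0.8, 0.9, 0.9], [[0, 0, 1], [1, 1, 1]])}
state = interpolate_states(0.5, 0.5, corners)
print(state['params'])   # midpoint blend of the four parameter sets
```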

Listening/monitoring

The influence of the direct/unprocessed sound: with acoustic instruments, the direct sound from the instrument will be heard clearly in addition to the processed sound. In our initial experiments we have simplified this by using electric guitar and close miked vocals, so we mostly hear the result of the effects processing. Still, the analysis of features is done on the dry signal. This creates a situation where it may be hard to distinguish which features control which modulations, because the modulation source is not heard clearly as a separate entity in the sound image. It is easy to mix the dry sound higher, but then we hear less of the modulations. It is also possible to let the modulated sound be the basis of analysis (creating the possibility for even more complex cross-adaptive feedback modulations, as the signals can affect each other's source of analysis), but this would probably make it even harder for the musicians to have intentional control over the analyzed features and thus the modulations. So the current scheme, if not the final answer, is a reasonable starting point.

Audience and fellow musicians’ perception of the interplay

How will the audience perceive this? Our current project does not focus on this question, but it is still relevant to visit it briefly. It also relates to expectations, to the schooling of the listener. Do we want the audience to know? Does knowledge of the modulation interaction impede the (regular) appreciation of the musical expression? One could argue that a common symphony orchestra concert-goer does not necessarily know the written score or have analyzed the music, but appreciates it as an aesthetic object on its own terms. The mileage may vary; some listeners know more and are more invested in the details and tools of the trade. Still, the music itself does not require knowledge of how it is made in order to be appreciated. For a schooled listener, and also for ourselves, we can hope to be able to play with expectations within the crossadaptive technique. Andreas mentioned that listening to live crossadaptive processing as we demonstrated it is like listening to an unfamiliar spoken language, trying to extract meaning. There might be some frustration over not understanding how it works. Also, expecting a fantastic new interaction mode and not hearing it can lead to disappointment. Using it as just another means of playing together, another parameter of musical expression, alleviates this somewhat. The listener does not have to know, but will probably get the opportunity for an extra layer of appreciation with an understanding of the process. In any case, our current research project does not directly concern the audience's appreciation of the produced music. We are at a very basic stage of exploration, and we need to experiment at a much lower level to sketch out how this kind of music can work before starting to consider how (or even if) it can be appreciated by an audience.

Conversation with Marije

We had a Skype meeting today between me (Oeyvind) and Marije Baalman; here are some notes from the conversation:

First, we really need to find an alternative to Skype; the flakiness of the connection makes it practically unusable. A service that allows recording the conversation would be nice too.

We talked about a number of perspectives on the project, including Marije's role as an external advisor and commentator, references to other projects that may relate to ours in terms of mapping signals to musical parameters, possible implementation on embedded devices, and issues relating to the signal flow from analysis to mapping to modulator destination.

Marije mentioned Joel Ryan as an interesting artist doing live processing of acoustic instruments. His work seems closely related to our earlier work in T-EMP. It was interesting to see and hear his work with Evan Parker. More info on his instruments and mapping setup would be welcome.

We discussed the prospect of working with embedded devices for the crossadaptive project. Marije mentioned the Bela project for the BeagleBone Black, developed at Queen Mary University. There are of course a number of nice things about embedded devices, such as making self-contained instruments for ease of use for musicians. However, we feel that at this stage our project is of such an experimental nature that it is more easily explored on a conventional computer. The technical hurdles of adapting it to an embedded device are more easily tackled once the signal flow and processing needs are better defined. In relation to this, we also discussed the different processes involved in our signal flow: analysis, mapping, destination.

Our current implementation (based on the 2015 DAFx paper) has this division into analysis, mapping/routing and destination (the effect processing parameter being controlled). The mapping/routing was basically done at the destination in the first incarnation of the toolkit. Lately, however, I have started reworking it to create a more generic mapping stage, allowing more freedom in what to control (i.e. controlling any effect processor or DAW parameter). Marije suggested that more preprocessing be done in the analysis stage, to create cleaner and more generically applicable control signals for easier use at the mapping stage. This coincides with our recent experiences in using low level analyses like spectral flux and crest: they work well on some instruments (e.g. vocals) as an indicator of the balance between tone and noise, while on other instruments (e.g. guitar) they sometimes work well and sometimes less so. Many of the current high level analyses (e.g. in music information retrieval) seem in many respects to be geared towards song recognition, while we need ways of extracting higher level features as continuous signals. The problem is to define and extract the musical meaning of a signal on a frame-by-frame basis. This will most probably differ between instruments, so a source-specific preprocessing stage seems reasonable.
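
As a sketch of the kind of preprocessing Marije suggested (not the current toolkit code): smooth the raw feature with a one-pole filter and normalize it against slowly adapting running minimum and maximum estimates, so that the mapping stage always receives a well-behaved 0..1 control signal. The class name and decay rates are arbitrary assumptions.

```python
class FeatureConditioner:
    """Smooth and adaptively normalize a raw analysis value to 0..1."""
    def __init__(self, smooth=0.9, range_decay=0.999):
        self.smooth = smooth            # one-pole smoothing coefficient
        self.range_decay = range_decay  # how slowly the min/max estimates forget
        self.state = None
        self.lo = None
        self.hi = None

    def process(self, raw):
        # One-pole lowpass to remove frame-to-frame jitter
        if self.state is None:
            self.state = raw
        else:
            self.state = self.smooth * self.state + (1.0 - self.smooth) * raw
        # Running min/max that slowly contract towards the current value
        if self.lo is None:
            self.lo = self.hi = self.state
        self.lo = min(self.state, self.lo * self.range_decay
                      + self.state * (1.0 - self.range_decay))
        self.hi = max(self.state, self.hi * self.range_decay
                      + self.state * (1.0 - self.range_decay))
        span = max(self.hi - self.lo, 1e-9)
        return (self.state - self.lo) / span
```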

The Digital Orchestra and IDMIL, and more specifically libmapper, might provide inspiration for our mapping module; we make a note of inspecting it more closely. Likewise the Live Algorithms for Music project, and possibly also the Ircam OMax project. Marije mentioned the 3DMIN project at UdK/TU Berlin, which uses exploration of arbitrary mappings as a method of influencing rather than directly controlling a destination parameter. This reminded me of earlier work by Palle Dahlstedt, and also, come to think of it, of our own modulation matrix for the Hadron synthesizer.

Later we want Marije to test our toolkit in more depth and comment more specifically on what it lacks and how it can be improved. There are probably some small technical issues in using the toolkit under Linux, but as our main tools are cross-platform (Windows/OSX/Linux), these should be relatively easy to solve.

Workflows, processing methods

As we have now done some initial experiment sessions and gotten to know the terrain a little better, it seems a good time to sum up the different working methods for cross adaptive processing in a live performance setting. Some of this is based on my previous article for DAFx-15, but it has been extended through the practical work we have done on the subject during the last months. I have also written about this in an earlier blog post here.

Dual input
This type of effect takes two (mono) signals as input and applies a transformation to one signal according to the characteristics of the other. In this category we typically find spectral morphing effects, resonators and convolvers. It is quite a practical effect type with regard to signal routing, as it can easily be inserted into any standard audio mixing signal flow (use the effect on a stereo aux channel, send to the effect from two sources, each source panned hard left and right).
We have some proposals/ideas for new effects in this category, including streaming convolution, cross adaptive resonators, effects inspired by modal reverberation techniques, and variations on common spectral processors adapted to the crossadaptive domain.

Sidechaining
This type of signal interaction can be found in conventional audio production, most commonly for dynamics processing (e.g. the genre-typical kick-drum-ducking-synth-pads). Multiband variations of sidechaining can be seen in de-essing applications, where the detector signal is filtered so that dynamics processing is applied only when there is significant energy in a specific frequency region. The application has traditionally been limited to fixing cosmetic errors (e.g. taming the too pronounced "s" sounds of a vocal recording with high sibilance). We have experimented with using multiband sidechaining with a wider selection of effect types. The technique can be used to attenuate or amplify the signal going into any effects processor, for example boosting the amount of vocal signal going into a pitch shifter according to the amount of energy in the low frequencies of the guitar track. This workflow is practical in that it can be done with off-the-shelf, commercially available tools, just applying them creatively and bending their intended use a bit. Quite complex routing scenarios can be created by combining different sidechains. As an extension of the previous example, the amount of vocal signal going into the pitch shifter could increase with the low frequency energy content of the guitar, but only if there is no significant energy in the high frequency region of the vocal signal. The workflow is limited to adjusting the amplitude of signals going into an effects processor. In all its simplicity, adjusting the amplitude of the input signal is remarkably effective and can create dramatic effects. It cannot, however, directly modify the parameters of the processing effect, so the effect itself always stays the same.
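
A sketch of the multiband sidechaining principle described above: filter the detector signal to the band of interest, follow its envelope, and use that envelope to scale how much of the other signal is sent into an effect. This is an offline illustration with invented settings, not the routing of any particular commercial plugin.

```python
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def band_envelope(detector, sr, band=(60.0, 200.0), smooth=0.999):
    """Envelope (roughly 0..1) of the detector signal within one frequency band."""
    sos = butter(4, band, btype='bandpass', fs=sr, output='sos')
    banded = sosfilt(sos, detector)
    env = lfilter([1.0 - smooth], [1.0, -smooth], np.abs(banded))
    return env / (np.max(env) + 1e-12)      # normalize for this offline sketch

def sidechain_send(source, detector, sr, depth=1.0):
    """Scale the source (e.g. vocal) sent to an effect by the detector's
    low-band energy (e.g. guitar low frequencies), as in the example above."""
    env = band_envelope(detector, sr)
    return source * (1.0 - depth + depth * env)

# Usage (with two equal-length mono arrays 'vocal' and 'guitar'):
# send_to_pitchshifter = sidechain_send(vocal, guitar, sr=44100)
```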

Envelope follower as modulation source
As a slight extension of the sidechaining workflow, we can use envelope followers (on the full signal or on a band-limited part of it) as modulation sources. This is in fact already done in sidechaining processors, but there the envelope followers are generally hardwired to control a compressor or a gate. In some DAWs (e.g. Reaper and Live), the output of the envelope follower can be routed more freely and thus used as a modulator for any processing parameter in the signal chain. This can be seen as a middle ground of crossadaptive workflows, combining significant complexity with widely available commercial tools. The workflow is not available in all DAWs, and it is limited to analyzing the signal energy (in any separate frequency band of the input signal). For complex mappings it might become cumbersome or unwieldy, as the mapping between control signals and effects parameters is distributed; it happens a little bit here and a little bit there in the processing chain. Still, it is a viable option if one needs to stick to commercially available tools.

Analysis and automation
This is the currently preferred method and the one that gives the most flexibility in terms of what features can be extracted by analysis of the source signal and how the resulting modulator signal can be shaped. The current implementation of the interprocessing toolkit allows any analysis method available in Csound to be used, and the plugins are interfaced to the DAW as VST plugins. The current modulator signal output is in the form of MIDI or OSC, but other automation protocols will be investigated. Csound's analysis methods can be extended by interfacing to existing technologies used in e.g. music information retrieval, like VAMP plugins, or different analysis libraries like Essentia and Aubio. It is still an open question how best to organize the routing, mixing and shaping of the modulator signals. Ideally, the routing system should give an intuitive insight into the current mapping situation, provide very flexible mapping strategies, and be able to change dynamically from one mapping configuration to another. Perhaps we should look into the modulation matrix, as used in Hadron?
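
One possible way to organize the routing and mixing of modulator signals, echoing the Hadron modulation matrix idea, is a plain matrix of scaling coefficients from analysis features to destination parameters. The sketch below is illustrative only (it is not the MIDIator or the toolkit code), and all feature and parameter names are hypothetical, loosely mirroring the vocal/guitar demo described earlier.

```python
import numpy as np

# Rows: analysis features, columns: destination parameters.
features = ['voc_flux', 'voc_rms', 'git_rms', 'git_centroid']
params   = ['delay_time', 'delay_feedback', 'reverb_size', 'hf_damping']

# Routing/scaling matrix: how much each feature modulates each parameter
routing = np.array([
    # delay_time  feedback  rev_size  hf_damp
    [ 0.0,        0.5,      0.0,      0.0 ],   # voc_flux
    [ 0.8,        0.0,      0.0,      0.0 ],   # voc_rms
    [ 0.0,        0.0,      0.7,      0.0 ],   # git_rms
    [ 0.0,        0.0,      0.0,     -0.6 ],   # git_centroid
])

def apply_modulation(base_values, feature_values):
    """base_values, feature_values: dicts of normalized 0..1 values."""
    f = np.array([feature_values[name] for name in features])
    mod = f @ routing                              # one modulation offset per parameter
    return {p: float(np.clip(base_values[p] + mod[i], 0.0, 1.0))
            for i, p in enumerate(params)}

base = {'delay_time': 0.3, 'delay_feedback': 0.2, 'reverb_size': 0.4, 'hf_damping': 0.5}
feats = {'voc_flux': 0.6, 'voc_rms': 0.5, 'git_rms': 0.2, 'git_centroid': 0.9}
print(apply_modulation(base, feats))
```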

Mapping analyzed features to processing parameters
When using the analyzer plugin, it is also a big question how we should design the mapping of analyzed features to effect processing parameters. Here we have at least three methods:

  1. manual mapping design
  2. autolearn features to control signal routing
  3. autolearn features to control effects parameters

For method 1) we have to continue experimenting to familiarize ourselves with the analyzed features and how they can be utilized in musically meaningful ways to control effects parameters. The process is cumbersome, as the analyzed features behave quite differently on different types of source signals. As an example, spectral flux quite reliably indicates the balance between noise and tone in a vocal signal; even if it does so on a guitar signal too, the indication is more ambiguous and thus less reliable as a precise modulation source for parameter control. On the positive side, the complex process of familiarizing ourselves with the analysis signals will also give us an intuitive relation to the material, which is one crucial aspect of being able to use it as performers.

Method 2) has been explored earlier by Brecht De Man et al. in an intelligent audio switch box. This plugin learns features (centroid, flatness, crest and roll-off) of an audio source signal and uses the learned features to classify incoming audio, routing it to either output A or B of the plugin. We could adapt this learning method to interpolate between effects parameter presets, much in the way the different states are interpolated in the Hadron synthesizer.
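
A toy version of that learning principle (not Brecht De Man's actual plugin code): store an average feature vector (centroid, flatness, crest, roll-off) for two example classes of audio, then route incoming frames to output A or B depending on which learned vector they are closest to.

```python
import numpy as np

def feature_vector(frame, sr):
    """(centroid, flatness, crest, roll-off) of one audio frame."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) + 1e-12
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    centroid = np.sum(freqs * mag) / np.sum(mag)
    flatness = np.exp(np.mean(np.log(mag))) / np.mean(mag)
    crest = np.max(mag) / np.mean(mag)
    cum = np.cumsum(mag)
    rolloff = freqs[np.searchsorted(cum, 0.85 * cum[-1])]
    return np.array([centroid, flatness, crest, rolloff])

class ABSwitch:
    """Nearest-mean classifier deciding whether a frame goes to output A or B."""
    def learn(self, frames_a, frames_b, sr):
        va = np.array([feature_vector(f, sr) for f in frames_a])
        vb = np.array([feature_vector(f, sr) for f in frames_b])
        self.scale = np.std(np.vstack([va, vb]), axis=0) + 1e-12
        self.mean_a = np.mean(va, axis=0) / self.scale
        self.mean_b = np.mean(vb, axis=0) / self.scale

    def route(self, frame, sr):
        v = feature_vector(frame, sr) / self.scale
        return 'A' if np.linalg.norm(v - self.mean_a) < np.linalg.norm(v - self.mean_b) else 'B'
```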

Method 3) can most probably be solved in a number of ways. Our current approach is based on a more extensive use of artificial intelligence methods than the classification in 2) above. The method is currently being explored by NTNU/IDI master student Iver Jordal, and his work-in-progress toolkit is available here. The task for the AI is to figure out the mapping between analyzed features and effects parameters based on which sound transformations are most successful. An objective measurement of "successful" is obviously a challenge, as a number of aesthetic considerations normally apply to what we would term a successful transformation of a piece of audio. As a starting point, we assume that in our context a successful transformation makes the processed sound pick up some features of the sound being analyzed. Here we label the analyzed sound as sound A, the input sound for processing as sound B, and the result of the processing as sound C. We can use a similarity measure between sounds A and C to determine the degree of success of any given transformation (B is transformed into C, and C should become more similar to A than B already is). Iver Jordal has implemented this idea using a genetic algorithm to evolve the connections of a neural network controlling the mapping from analyzed features to effect control parameters. Evolved networks can later be transplanted to a realtime implementation, where the same connections can be used on potentially different sounds. This means we can evolve modulation mappings suitable for different instrument combinations and try them out on a wider range of sounds. This application would also potentially gain from being able to interpolate between different modulator routings, as described above.
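
The similarity measure used as a fitness function could, in a strongly simplified form, look like the sketch below: compute a small feature vector for sounds A, B and C, and reward transformations where C has moved closer to A than B was. This is a stand-in for illustration, not the measures actually used in Iver Jordal's toolkit; the tiny feature set is only an example.

```python
import numpy as np

def framewise_features(audio, sr, frame_len=2048, hop=1024):
    """Mean RMS (dB) and spectral centroid over all frames, as a tiny feature vector."""
    feats = []
    for i in range(0, len(audio) - frame_len, hop):
        frame = audio[i:i + frame_len] * np.hanning(frame_len)
        mag = np.abs(np.fft.rfft(frame)) + 1e-12
        freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
        rms_db = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)
        centroid = np.sum(freqs * mag) / np.sum(mag)
        feats.append([rms_db, centroid])
    return np.mean(feats, axis=0)

def transformation_fitness(sound_a, sound_b, sound_c, sr):
    """Positive when the processed sound C is more similar to A than B was."""
    fa = framewise_features(sound_a, sr)
    fb = framewise_features(sound_b, sr)
    fc = framewise_features(sound_c, sr)
    scale = np.abs(fa) + np.abs(fb) + 1e-12        # crude per-feature normalization
    dist_before = np.linalg.norm((fb - fa) / scale)
    dist_after = np.linalg.norm((fc - fa) / scale)
    return dist_before - dist_after                 # improvement in similarity to A
```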