Cross adaptive processing as musical intervention
http://crossadaptive.hf.ntnu.no
Exploring radically new modes of musical interaction in live performance

Session with Kim Henry Ortveit (22 Feb 2018)

Kim Henry is currently a master’s student in music technology at NTNU, and as part of his master’s project he has designed a new hybrid instrument. The instrument allows close interaction between what is played on a keyboard (or rather a Seaboard) and a set of drum pads. Modulations and patterns played on one part of the instrument determine how other components of the instrument actually sound. This is combined with some intricate layering, looping, and quantization techniques that allow shaping of the musical continuum in novel ways. Since the instrument is in itself crossadaptive between its constituent parts (and simply also because we think it sounds great), we wanted to experiment with it within the crossadaptive project too.


Kim Henry’s instrument

The session was done as an interplay between Kim Henry on his instrument and Øyvind Brandtsegg on vocals and crossadaptive processing. It was conducted as an initial exploration of just “seeing what would happen” and trying out various ways of making crossadaptive connections here and there. The two audio examples here are excerpts of longer takes.

Take 1, Kim Henry Ortveit and Øyvind Brandtsegg

 

2018_02_Kim_take2

Take 2, Kim Henry Ortveit and Øyvind Brandtsegg

 

Session with Michael Duch (22 Feb 2018)

On February 12th, we did a session with Michael Duch on double bass, exploring auto-adaptive use of our techniques. We were interested in seeing how the crossadaptive techniques could be used for personal timbral expansion for a single player. This is a step back in complexity from the crossadaptive interplay, but it is interesting for two reasons. One is to check how useful our modulation techniques are in a setting with more traditional performer control: when there is only one performer modulating himself, there is a closer relationship between performer intention and timbral result. The second reason to do this specifically with Michael is that we know from his work with Lemur and in other settings that he intently and intimately relates to the performance environment, the resonances of the room and the general ambience. Due to this focus, we also wanted to use live convolution techniques, where he first records an impulse response and then plays through the same filter himself. This exposed one feature needed in the live convolver: one might want to delay the activation of the new impulse response until its recording is complete (otherwise we will almost certainly get extreme resonances while recording, since the filter and the excitation are very similar). That technical issue aside, it was musically very interesting to hear how he explored resonances in his own instrument, and also used small amounts of detuning to approach beating effects in the relation between filter and excitation signal. The self-convolution also exposes parts of the instrument spectrum that usually are not so noticeable, like bassy components of high notes, or prominent harmonics that would otherwise be perceptually masked by their merging into the full tone of the instrument.

 

2018_02_Michael_test1

Take 1,  autoadaptive exploration

2018_02_Michael_test2

Take 2,  autoadaptive exploration

Self-convolution

2018_02_Michael_conv1

Self-convolution take 1

2018_02_Michael_conv2

Self-convolution take 2

2018_02_Michael_conv3

Self-convolution take 3

2018_02_Michael_conv4

Self-convolution take 4

2018_02_Michael_conv5

Self-convolution take 5

2018_02_Michael_conv6

Self-convolution take 6

 

Crossadaptive seminar Trondheim, November 2017 (5 Nov 2017)

As part of the ongoing research on crossadaptive processing and performance, we had a very productive seminar in Trondheim on November 2nd and 3rd, 2017. This post shows the program of presentations, performances and discussions, and provides links to more detailed documentation of each session as it becomes available. Each part will be added as the documentation is ready, so if something is missing now, do check back later. The seminar was also streamed, and the recorded streams will be archived.

In addition to the researchers presenting, we also had an audience of students from the music technology and jazz departments, as well as other researchers and teachers from NTNU. We are grateful for the input from the audience, which enriched our discussions.

Program:

Thursday 2. November

Practical experiments

Introduction and status. [slides]
Øyvind Brandtsegg

 

Performance
Maja S.K. Ratkje, Øyvind Brandtsegg, Miller Puckette (stand-in for Stian Westerhus)

 

Work methods and session reports. Experiences, insights, reflections.
Trond Engum (with team) [slides], Bernt Isak Wærstad (with team) [slides]

Instruments and tools

Instrumental affordances, crossadaptivity as instrumental gesture.
Marije Baalman [slides]

 


Performance
Tone Åse, Carl Haakon Waadeland, Trond Engum

 

Instrument design and technological developments. [slides]
Sigurd Saue, Victor Lazzarini, Øyvind Brandtsegg

 

Friday 3. November

Reflection. Aesthetic and philosophical issues

Documentation methods [slides]
Andreas Bergsland


Performance
Bjørnar Habbestad, Gyrid N. Kaldestad, Bernt Isak Wærstad

 

What does it mean for the performer, for the audience, for the music? How does it change the game?
Solveig Bøe [notes], Simon Emmerson [slides]

Wider use and perspectives

Experiences with Philippe Manoury and Juliana Snapper, thoughts on instrumental control, and a performance
Miller Puckette [PD patches]
(with Øyvind Brandtsegg for a brief liveconvolver performance)

 

Looking at the music from the mix perspective. Viability of effects as expression. The wider field of automatic mixing and adaptive effects.
Gary Bromham [slides], Josh Reiss [slides]

 

Outcomes and evaluation. [slides]
Moderator: Øyvind Brandtsegg

Liveconvolver experiences, San Diego (7 Jun 2017)

The liveconvolver has been used in several concerts and sessions in San Diego this spring. I played three concerts with the group Phantom Station (The Loft, Jan 30th, Feb 27th and Mar 27th), where the first involved the liveconvolver. Then came a concert with the band Creatures (The Loft, April 11th), where the live convolver was used with contact mikes on the drums, live sampling the IR from the vocals. I also played a duo concert with Kjell Nordeson at Bread and Salt on April 13th, where the liveconvolver was used with contact mikes on the drums, live sampling the IR from my own vocals. Then a duo concert with Kyle Motl at Coaxial in L.A. (April 21st), where a combination of crossadaptive modulation and live convolution was used. For the duo with Kyle, I switched between using bass as the IR and vocals as the IR, letting the other instrument play through the convolver. A number of studio sessions were also conducted, with Kjell Nordeson, Kyle Motl, Jordan Morton, Miller Puckette, Mark Dresser, and Steven Leffue. A separate report on the studio session in UCSD Studio A will be published later.

“Phantom Station”, The Loft, SD

This group is based on Butch Morris’ conduction language for improvisation, and the performance typically requires a specific action (specific, although it is free and open) to happen on cue from the conductor. I was invited into this ensemble and encouraged to use whatever part of my instrumentarium I might see fit. Since I had just finished the liveconvolver plugin, I wanted to try that out. I also figured my live processing techniques would fit easily, in case the liveconvolver did not work so well. Both the live processing and the live convolution instruments were in practice less than optimal for this kind of ensemble playing. Even though the instrumental response can be fast (low latency), the way I normally use these instruments is not geared towards making a musical statement quickly for one second and then suddenly stopping again. This leads me to reflect on a quality measure I haven’t really thought of before. For lack of a better word, let’s call it reactive inertia: the ability to completely change direction on the basis of some external and unexpected signal. This is different from the audio latency (of the audio processing) and also from the user interface latency (for example, the time it takes the performer to figure out which button to turn to achieve a desired effect). I think it has to do with the sound production process: some effects take time to build up before they are heard as a distinct musical statement. There is also inertia in the interaction between humans, and in the signal chain of sound production before the effects (if you live sample or live process someone, you first need to get a sample, or some exciter signal). For live interaction instruments, the reactive inertia is then governed by the time it takes two performers to react to the external stimuli, and for their combined efforts to be turned into sound by the technology involved. Much like what an old man once told me at Ocean Beach:

“There’s two things that needs to be ready for you to catch a wave
– You, …and the wave”.

We can of course prepare for sudden shifts in the music, and construct instruments that will be able to produce sudden shifts and fast reactions. Still, the reaction to a completely unexpected or unfamiliar stimulus will be slower than optimal. An acoustic instrument has fewer of these limitations. For this reason, I switched to using the Marimba Lumina for the remaining two concerts with Phantom Station, to be able to shape immediate short musical statements with more ease.

Phantom Station

“Creatures”, The Loft, SD

Creatures is the duo of Jordan Morton (double bass, vocals) and Kai Basanta (drums). I had the pleasure of sitting in with them, making it a trio for this concert at The Loft. Creatures have some composed material in the form of songs, and combine this with long improvised stretches. For this concert I got to explore the liveconvolver quite a bit, in addition to the regular live processing and the Marimba Lumina. The convolver was used with input from piezo pickups on the drums, convolving with an IR live recorded from the vocals. Piezo pickups can be very “impulse-like”, especially when used on percussive instruments. The pickups’ response has a generous amount of high frequencies and a very high dynamic range. Due to the peaky, impulse-like nature of the signal, it drives the convolution almost like a sample playback trigger, creating delay patterns on the input sound. Still, the convolver output can become sustained and dense when there is high activity on the triggering input. In the live mix, the result sounds somewhat similar to infinite reverb or “freeze” effects (using a trigger to capture a timbral snippet and holding that sound as long as the trigger is enabled). Here, the capture is the IR recording, and the trigger that creates and sustains the output is the activity on the piezo pickup. The causality and performer interface are very different from those of a freeze effect, but listening to it from the outside, the result is similar. These expressive limitations can be circumvented by changing the miking technique, and working in a more directed way with regard to what sounds go into the convolver. Due to the relatively few control parameters, the main thing deciding how the convolver sounds is the input signals. The term causality in this context was used by Miller Puckette when talking about the relationship between performative actions and instrumental reactions.

Creatures + Brandtsegg
CreaturesTheLoft_mix1_mstr

Creatures at The Loft. A liveconvolver example can be found at 29:00 to 34:00 with Vocal IR, and briefly around 35:30 with IR from double bass.

“Nordeson/Brandtsegg duo”, Bread & Salt, SD

Duo configuration, where Kjell plays drums/percussion and vibraphone, and I did live convolution, live processing and Marimba Lumina. My techniques were much like those I used with Creatures. The live convolver setup was also similar, with the IR being live sampled from my vocals and the convolver being triggered by piezo pickups on Kjell’s drums. I had the opportunity to work over a longer period of time preparing for this concert together with Kjell. Because of this, we managed to develop a somewhat more nuanced utilization of the convolver techniques. Still, in the live performance situation on a PA, the technical situation made it a bit more difficult to utilize the fine-grained control over the process, and I felt the sounding result was similar in function to what I did together with Creatures. It works well like this, but there is potential for getting a lot more variation out of this technique.

Nordeson and Brandtsegg setup at Bread and Salt

We used a quadraphonic PA setup for this concert. Due to an error with the front-of-house patching, only 2 of the 4 lines from my electronics were recorded, so the mix is somewhat off balance. The recording also lacks the first part of the concert, starting some 25 minutes into it.

NordesonBrandtsegg_mix1_mstr

“The Gringo and the Desert”, Coaxial, LA

In this duo Kyle Motl plays double bass and I do vocals, live convolution, live processing, and also crossadaptive processing. I did not use the Marimba Lumina in this setting, so some more focus was allowed for the processing. In terms of crossadaptive processing, the utilization of the techniques is a bit more developed in this configuration. We’ve had the opportunity to work over several months, with dedicated rehearsal sessions focusing on separate aspects of the techniques we wanted to explore. As it happened during the concert, we played one long set, and the different techniques were enabled as needed. Parameters that were manually controlled in parts of the set were delegated to crossadaptive modulations in other parts of the set. The live convolver was used freely as one out of several active live processing modules/voices. The liveconvolver with vocal IR can be heard for example from 16:25 to 20:10. Here, the IR is recorded from the vocals, and the process acts as a vocal “shade” or “pad”, creating long sustained sheets of vocal sound triggered by the double bass. Then comes the liveconvolver with bass IR from 20:10 to 23:15, where we switch to full crossadaptive modulation until the end of the set. We used a complex mapping designed to respond to a variety of expressive changes. Our approach as performers was not to intellectually focus on controlling specific dimensions, but to allow the adaptive processing to naturally follow whatever happened in the music.

Gringo and the Desert soundcheck at Coaxial, L.A

coaxial_Kyle_Oeyvind_mix2_mstr

Gringo and the Desert at Coaxial DTLA – yes, the background noise is the crickets outside.

Session with Steven Leffue (Apr 28th, May 5th)

I did two rehearsal sessions together with Steven Leffue in April, as preparation for the UCSD Studio A session in May. We worked both on crossadaptive modulations and on live convolution. Especially interesting with Steven is his own use of adaptive and crossadaptive techniques. He has developed a setup in PD where he tracks transient density and amplitude envelope over different time windows, and also uses the standard deviation of transient density within these windows. The windowing and statistics he uses can act somewhat like a feature we have also discussed in our crossadaptive project: a method of making an analysis “in relation to the normal level” for a given feature, and thus a way to track relative change. Steven’s master’s thesis “Musical Considerations in Interactive Design for Performance” relates to this and other issues of adaptive live performance. Notable is also his ICMC paper “AIIS: An Intelligent Improvisational System”. His music can be heard at http://www.stevenleffue.com/music.html, where the adaptive electronics are featured in “A theory of harmony” and “Futures”.
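A minimal Python sketch of that kind of “relative to normal” tracking (an illustration of the idea only, not Steven’s actual PD patch): each incoming feature value – transient density, for instance – is compared against its running mean and standard deviation over a longer window.

    import numpy as np
    from collections import deque

    class RelativeFeature:
        def __init__(self, window=200):
            self.history = deque(maxlen=window)   # recent values define "the normal level"

        def update(self, value):
            self.history.append(value)
            mean = np.mean(self.history)
            std = np.std(self.history) + 1e-12
            return (value - mean) / std           # deviation from "normal", in standard deviations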
Our first session was mainly devoted to testing and calibrating the analysis methods for use on the saxophone. In very broad terms, we notice that the different analysis streams now seem to work relatively similarly across different instruments. The main differences are related to the extraction of tone/noise balance, general brightness, timbral “pressedness” (weight of formants), and to some extent transient detection and pitch tracking. The reason why the analysis methods now appear more robust is partly due to refinements in their implementation, and partly due to (more) experience in using them as modulators. Listening, experimentation, tweaking, and plainly just a lot of spending-time-with-them have made for a more intuitive understanding of how each analysis dimension relates to an audio signal.
The second session was spent exploring live convolution between sax and vocals. Of particular interest here are the comments from Steven regarding the performative roles of recording the IR versus playing through the convolver. Steven states quite strongly that the one recording the IR has the most influence over the resulting music. This impression is consistent both when he records the IR (and I sing through it), and when I record the IR and he plays through it. This may be caused by several things, but of special interest is that it is diametrically opposed to what many other performers have stated. Kyle, Jordan and Kjell all voiced, in our initial sessions, a higher performative intimacy, a closer connection to the output, when playing through the IR. Maybe Steven is more concerned with the resulting timbre (including the processed sound) than with the physical control mechanism, as he routinely designs and performs with his own interactive electronics rig. Of course all musicians care about the sound, but perhaps there is a difference of approach on just how to get there. With the liveconvolver we put the performers in an unfamiliar situation, and the differences in approach might just show different methods of problem solving to gain control over this situation. What I’m trying to investigate is how the liveconvolver technique works performatively, and here the performer’s personal and musical traits play into the situation quite strongly. Again, we can only observe single occurrences and try to extract things that might work well. There are no conclusions to be drawn on a general basis as to what works and what does not, and neither can we conclude what the nature of this situation and this tool is. One way of looking at it (I’m still just guessing) is that Steven treats the convolver as *the environment* in which music can be made. The changes to the environment determine what can be played and how that will sound, and thus the one recording the IR controls the environment and subsequently controls the most important factor in determining the music.
In this session, we also experimented a bit with transposed and reversed IRs, these being some of the parametric modifications we can make to the IR with our liveconvolver technique. Transposing can be interesting, but also quite difficult to use musically. Transposing in octave intervals can work nicely, as it acts just as much as a timbral colouring without changing pitch class. A fun fact about reversed IRs as used by Steven: if he played in the style of Charlie Parker and we reversed the IR, it would sound like Evan Parker. Then, if he played like Evan Parker and we reversed the IR, it would still sound like Evan Parker. One could say this puts Evan Parker at the top of the convolution-evolutionary-saxophone tree.

Steven Leffue

2017_05_StevenOyvLiveconv_VocIR_mix3_mstr

Liveconvolver experiment Sax/Vocals, IR recorded by vocals.

2017_05_StevenOyvLiveconv_SaxIR_mix3_mstr

Liveconvolver experiment Sax/Vocals, IR recorded by Sax.

2017_05_StevenOyvLiveconv_reverseSaxIR_mix3_mstr

Liveconvolver experiment Sax/Vocals, time reversed IR recorded by Sax.

 

Session with Miller Puckette, May 8th

The session was intended as a “calibration run”, to see how the analysis methods responded to Miller’s guitar, in preparation for the upcoming bigger session in UCSD Studio A. The main objective was to determine which analysis features would work best as expressive dimensions, find the appropriate ranges, and start looking at potentially useful mappings. After this, we went on to explore the liveconvolver with vocals and guitar as the input signals. Due to the “calibration run” mode of approach, the session was not videotaped. Our comments and discussion were only dimly picked up by the microphones used for processing. Here’s a transcription of some of Miller’s initial comments on playing with the convolver:

“It is interesting, that …you can control aspects of it but never really control the thing. The person who’s doing the recording is a little bit less on the hook. Because there’s always more of a delay between when you make something and when you hear it coming out [when recording the IR]. The person who is triggering the result is really much more exposed, because that person is in control of the timing. Even though the other person is of course in control of the sonic material and the interior rhythms that happen.”

Since the liveconvolver has been developed and investigated as part of the research on crossadaptive techniques, I had slipped into the habit of calling it a crossadaptive technique. In discussion with Miller, he pointed out that the liveconvolver is not really *crossadaptive* as such. But it involves some of the same performative challenges, namely playing something that is not played solely for the purpose of its own musical value. The performers will sometimes need to play something that will affect the sound of the other musician in some way. One of the challenges is how to incorporate that thing into the musical narrative, taking care of how it sounds in itself and exactly how it will affect the other performer’s sound. Playing with the liveconvolver has this performative challenge, as has the regular crossadaptive modulation. One thing the live convolver does not have is the reciprocal/two-way modulation; it is more of a one-way process. The recent Oslo session on live convolution used two liveconvolvers simultaneously to re-introduce the two-way reciprocal dependency.

Miller Puckette

2017_05_liveconv_OyvMiller1_M_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by guitar.

2017_05_liveconv_OyvMiller2_M_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by guitar.

2017_05_liveconv_OyvMiller3_M_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by guitar.

2017_05_liveconv_OyvMiller4_O_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by vocals.

2017_05_liveconv_OyvMiller5_O_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by vocals.

Live convolution session in Oslo, March 2017 (7 Jun 2017)

Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice), Mats Claesson (documentation and observation).

The focus of this session was to work with the new live convolver in Ableton Live.

Setup – getting to know the Convolver

We worked in duo configurations – flute/guitar, guitar/vocal and vocal/flute.

We started by spending some time exploring and understanding the controls. Our first setup was guitar/flute, and we chose to start convolving in auto mode. We knew, both from experience with convolution in general and from previous live convolver session reports, that sustained and percussive sounds would yield very different results, and we therefore started with that combination: percussive sounds (flute) with sustained sounds (guitar). While this made it quite clear how the convolver worked, the output was less than impressive. The next step was to switch the inputs while preserving the playing technique. Still, everything seemed to sound somewhat delayed, with ringing overtones. It was suggested to add in some dry signal to produce more aesthetically pleasing sounds, but at this point we decided to listen only to the wet signal, as the main goal was to explore and understand the ways of the convolver.

Sonics of convolution

We continued the process by switching from auto to triggered mode, where the flute had the role of the IR, to make the convolver a bit more responsive. This produced a few nice moments, but the overall result was still quite “mushy”. We tried reducing the IR size and working with the IR pitch to find ways of getting other sound qualities.

We subsequently switched from flute to vocals, working with guitar and voice in trigger mode, where the voice was used as the IR. We also decided to add in some dry sound from the vocals, since the listeners (Mats and Bjørnar) found it much more interesting when we could hear the relation between the IR and the convolved signal. Since the convolver still felt quite slow in response and muddy in sound, we also tried out a very short IR size in auto mode.

As shown in this video, we tried to find out how the vocal sound could affect the processing of the flute. The experience was that we got either some strange reverberation or some strange EQ – the sound that came out of the processed flute was not as interesting as we had hoped.

Trying again – it doesn’t feel like we get any interesting sounds. Is it the vocal input that doesn’t give the flute sound anything to work with? The flute sound we get out is still a bit harsh in the higher frequencies, and with a bit of strange reverberation.

Critical listening

All these initial tests were more or less didactic, in that we chose fixed materials (such as percussive versus sustained) in order to emphasise the effect of the convolver. After three short sessions this became a limitation that hindered improvisational flow and phrasing, especially in the flute/vocal session. Too often, the sonic result of the convolution was less than intriguing. We discussed whether the live convolver would lend itself more easily to composed situations, as the necessity of carefully selecting material types that would convolve in an interesting way rendered it less usable in improvised settings. We decided to add more control to the setup in order to remedy this problem.

Adding control

Gyrid commented that she felt the system was quite limiting, especially when you are used to controlling the live processing yourself. To remedy this, we started adding parameter controls for the performers. In a session with vocals and guitar, we added an expression pedal for Bernt Isak (guitar) to control the IR size. This was the first time we had the experience of a responsive system that made musical sense.

Revised setup

After some frustrating and, from our perception, failed attempts at getting interesting musical results, we decided to revise our setup. After some discussion we came to the conclusion that our existing setup, using one live convolver, was cross adaptive in the signal control domain, but didn’t feel cross adaptive in the musical domain. We therefore decided to add another crossing by using two live convolvers, where each instrument had the role of IR in one convolver and of input in the other. We also decided to set one convolver to auto mode and the other to trigger mode, for better separation and more variation in the musical output.

Guitar

  • Fader 1: IR size
  • Fader 2: IR auto update-rate
  • Fader 3: Dry sound
  • Expression pedal 1: IR pitch
  • Toggle switch: switch inputs

Vocal

  • Fader 1: IR size
  • Fader 2: IR pitch

Flute

  • Fader 1: IR size
  • Fader 2: IR pitch
  • Fader 3: Dry sound

The convolvers were placed on return tracks, both panned slightly to the sides to make it easier to distinguish the two convolvers, while also adding some stereo width.

Sound excerpt 1 – flute & vocal, using two convolvers:

Bjørnar has the same setup as Bernt Isak. A better experience. It could change the way the convolver uses the signal if we use different microphones – maybe one inside and one outside the flute.

It’s quite apparent to us that using sustained sounds doesn’t work very well. It seems to us that the effect just makes the flute sound less interesting; it somehow reduces the bandwidth, amplifies the resonant frequencies or just makes some strange phasing. The soundscape changes and gets more interesting when we shift to more percussive and distorted sound qualities. Could it be an idea to make it possible to extract only the distorted parts of an input sound?

Sound excerpt 2 – guitar & vocal, using two convolvers:

Session with guitar and vocal where we control the IR size, IR pitch and IR rate.

Gyrid has the input signal in trigger mode and can control IR size and pitch with faders. Bernt Isak has the input signal in auto mode and can control the amount of dry signal, IR size and rate with faders, and pitch with an expression pedal. Using two convolvers was a very positive experience! Even though one convolver is cross adaptive in the sense that it uses two signals, it didn’t feel cross adaptive musically, but more like a traditional live processing setup. We also found that having one convolver in trigger mode and one in auto mode was a good way of adding movement and variation to the music, as one convolver keeps a more steady “timing”, while the other one can be completely free. It also seems essential to have the possibility to control the dry signal – hearing the dry signal makes the music more three-dimensional.

Sound excerpt 3 – flute & guitar, using two convolvers:

Session with guitar and flute – Bjørnar has the same setup as Gyrid, but with added control for the amount of dry sound. Same issue with the flute microphone as above.

The experience is very different between flute and vocals and guitar and vocals; this has mainly to do with the way the instruments are played. The guitar has a very distinct attack, and it is very clear when the timbral character changes. Flute and vocals have a more similar frequency response, and the result gets less interesting. Adding more effects to the guitar (distortion + tremolo) makes a huge difference – but so does the fact that percussive sounds from the vocals give the most interesting musical output.

 

Overall session reflections 

The choice of instrument combinations is crucial for live convolving to be controllable and to produce artistically interesting results. We also noted that there is a difference between signal cross adaptiveness and musical cross adaptiveness. From our experience, double live convolving is needed to produce a feel of musical cross adaptiveness similar to what we’ve experienced in the previous signal-processing cross adaptive sessions.

Ideas for further development

Some ideas for adding more control: the possibility to switch between auto mode and trigger mode could be interesting. It would also be useful to have a visual indicator for the IR trigger mode, for easier tuning of the trigger settings.

Bjørnar suggested combining the live convolver with the analyzer/midimapper in order to only convolve when there is a minimum of noise and/or transients available, e.g. linking spectral crest to a wet/dry parameter in the convolution, or splitting the audio signal based on spectral conditions.

It could perhaps also yield some interesting results to add some spectral processing to reduce the fundamental frequency component (similar to Øyvind Brandtsegg’s feedback tools) for instruments that have a very strong fundamental (like the flute).

The entrails of Open Sound Control, part one (7 Apr 2017)

Many of us are very used to employing the Open Sound Control (OSC) protocol to communicate with synthesisers and other music software. It’s very handy and flexible for a number of applications. In the cross adaptive project, OSC provides the backbone of communications between the various programs and plugins we have been devising.

Generally speaking, we do not need to pay much attention to the implementation details of OSC, even as developers. User-level tasks only require us to decide the names of message addresses, their types and the source of the data we want to send. At the programming level, it’s not very different: we just employ an OSC implementation from a library (e.g. liblo, PyOSC) to send and receive messages.

It is only when these libraries are not doing the job as well as we’d like that we have to get our hands dirty. That’s what happened in the past weeks in the project. Oeyvind diagnosed some significant delays and a higher than usual cost in OSC message dispatch. This, when we looked, seemed to stem from the underlying implementation we have been using in Csound (liblo, in this case). We tried to get around this by implementing an asynchronous operation, which seemed to improve the latencies but did nothing to help with the computational load. So we had to change tack.

OSC messages are transport-agnostic, but in most cases use the User Datagram Protocol (UDP) transport layer to package and send messages from one machine (or program) to another. So it appeared to me that we could simply write our own sender implementation using UDP directly. I got down to programming an OSCsend opcode that would be a drop-in replacement for the original liblo-based one.

OSC messages are quite straightforward in their structure, based on 4-byte blocks of data. They start with an address, which is a null-terminated string like, for instance, “/foo/bar”:

'/' 'f' 'o' 'o' '/' 'b' 'a' 'r' '\0'

This, we can count, has 9 characters – 9 bytes – and, because of the 4-byte structure, needs to be padded to the next multiple of 4, 12, by inserting some more null characters (zeros). If we don’t do that, an OSC receiver would probably barf at it.
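To make this concrete, here is a minimal Python sketch of the padding step (an illustration only, not the actual opcode implementation):

    def osc_string(s):
        b = s.encode('ascii') + b'\0'   # null-terminated string
        while len(b) % 4:               # pad with extra nulls to the next 4-byte boundary
            b += b'\0'
        return b

    osc_string('/foo/bar')   # 12 bytes: the 8 characters plus 4 null bytes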

Next, we have the data types, e.g. ‘i’, ‘f’, ‘s’ or ‘b’ (the basic types). The first two are numeric, 4-byte integers and floats, respectively. These are to be encoded as big-endian numbers, so we will need to byteswap in little-endian platforms before the data is written to the message. The data types are encoded as a string with a starting comma (‘,’) character, and need to conform to 4-byte blocks again. For instance, a message containing a single float would have the following type string:

',' 'f' '\0'

or “,f”. This will need another null character to make it a 4-byte block. Following this, the message takes in a big-endian 4-byte floating-point number. Similar ideas apply to the other numeric type, which carries integers.

String types (‘s’) denote a null-terminated string, which as before, needs to conform to a length that is a multiple of 4-bytes. The final type, a blob (‘b’), carries a nondescript sequence of bytes that needs to be decoded at the receiving end into something meaningful. It can be used to hold data arrays of variable lengths, for instance. The structure of the message for this type requires a length (number of bytes in the blob) followed by the byte sequence. The total size needs to be a multiple of 4 bytes, as before. In Csound, blobs are used to carry arrays, audio signals and function table data.
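Continuing the Python sketch above, a blob could be packed like this (illustrative only):

    import struct

    def osc_blob(data):
        b = struct.pack('>i', len(data)) + data   # 4-byte big-endian length, then the raw bytes
        while len(b) % 4:
            b += b'\0'                            # pad the total to a 4-byte boundary
        return b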

If we follow this recipe, it is pretty straightforward to assemble a message, which will be sent as a UDP packet. Our example above would look like this:

'/' 'f' 'o' 'o' '/' 'b' 'a' 'r' '\0' '\0' '\0' '\0'
',' 'f' '\0' '\0' 0x3F800000   (the four bytes of the big-endian float, here with the value 1.0)

This is what OSCsend does, both in its original form and in its new implementation. With it, we managed to provide a lightweight (low computational cost) and fast OSC message sender. In the follow-up to this post, we will look at the other end: how to receive arbitrary OSC messages from UDP.
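As a rough Python sketch of the whole recipe (reusing the osc_string() helper above; the address, value and port number are arbitrary and purely illustrative):

    import socket, struct

    def osc_message(address, value):
        msg = osc_string(address)        # padded, null-terminated address
        msg += osc_string(',f')          # type tag string for a single float
        msg += struct.pack('>f', value)  # 4-byte big-endian float
        return msg

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message('/foo/bar', 1.0), ('127.0.0.1', 7770))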

]]>
http://crossadaptive.hf.ntnu.no/index.php/2017/04/07/the-entrails-of-open-sound-control-part-one/feed/ 0 815
Seminar on instrument design, software, control (24 Mar 2017)

Online seminar, March 21, 2017

Trond Engum and Sigurd Saue (Trondheim)
Bernt Isak Wærstad (Oslo)
Marije Baalman (Amsterdam)
Joshua Reiss (London)
Victor Lazzarini (Maynooth)
Øyvind Brandtsegg (San Diego)

Instrument design, software, control

We now have some tools that allow practical experimentation, and we’ve had the chance to use them in some sessions. We have some experience as to what they solve and don’t solve, and how simple (or not) they are to use. We know that they are not completely stable on all platforms; there are some “snags” on initialization and/or termination that give different problems on different platforms. Still, in general, we have just enough to evaluate the design in terms of instrument building, software architecture, interfacing and control.

We have identified two distinct modes of working crossadaptively: the Analyzer-Modulator workflow and the Direct-Cross-Synthesis workflow. The Analyzer-Modulator method consists of extracting features and arbitrarily mapping these features as modulators to any effect parameter. The Direct-Cross-Synthesis method involves a much closer interaction directly on the two audio signals, for example as seen with the liveconvolver or different forms of adaptive resonators. These two methods give very different ways of approaching the crossadaptive interplay, with the direct-cross-synthesis method being perceived as closer to the signal and, as such, in many ways closer to each other for the two performers. The Analyzer-Modulator approach allows arbitrary mappings, and this is both a strength and a weakness. It is powerful in allowing any mapping, but it is harder to find mappings that are musically and performatively engaging. At least this can be true when a mapping is used without modification over a longer time span. As a further extension, an even more distanced manner of crossadaptive interplay was recently suggested by Lukas Ligeti (UC Irvine, following Brandtsegg’s presentation of our project there in January). Ligeti would like to investigate crossadaptive modulation of MIDI signals between performers. The mapping and processing options for event-based signals like MIDI would have even more degrees of freedom than what we achieve with the Analyzer-Modulator approach, and it would have an even greater degree of “remoteness” or “disconnectedness”. For Ligeti, one of the interesting things is the disconnectedness and how it affects our playing. In perspective, we start to see some different viewing angles on how crossadaptivity can be implemented and how it can influence communication and performance.

In this meeting we also discussed problems with the current tools, mostly concerning the tools of the Analyzer-Modulator method, as that is where we have experienced the most obvious technical hindrances to effective exploration. One particular problem is the use of MIDI controller data as our output. Even though it gives great freedom in modulator destinations, it is not straightforward for a system operator to keep track of which controller numbers are actively used and what destinations they correspond to. Initial investigations of using OSC in the final interfacing to the DAW have been done by Brandtsegg, and many current DAWs seem to allow “auto-learning” of OSC addresses based on touching controls of the modulator destination within the DAW. A two-way communication between the DAW and our mapping module should be within reach and would immensely simplify that part of our tools.
We also discussed the selection of features extracted by the Analyzer: which ones are most actively used, whether any could be removed, and/or whether any could be refined.

Initial comments

Each of the participants was invited to give their initial comments on these issues. Victor suggests we could rationalize the tools a bit, simplify them, and get rid of the Python dependency (which has caused some stability and compatibility issues). This should be done without losing flexibility and usability – perhaps a turn towards the originally planned direction of relying basically on Csound for analysis instead of external libraries. Bernt has had some meetings with Trond recently and they have some common views. For them it is imperative to be able to use Ableton Live for the audio processing, as the creative work during sessions is really only possible using tools they are familiar with. Finding solutions to aesthetic problems that may arise requires quick turnarounds, and for this to be viable, familiar processing tools. There have been some issues related to stability in Live, which have sometimes significantly slowed down or outright hindered an effective workflow. Trond appreciates the graphical display of signals, as it helps in teaching performers how the analysis responds to different playing techniques.

Modularity

Bernt also mentions the use of very simple, scaled-down experiments directly in Max, done quickly with students. It would be relatively easy to make simple patches that combine analysis of one (or a few) features with a small number of modulator parameters. Josh and Marije also mention modularity and scaling down as measures to clean up the tools. Sigurd has some other perspectives on this, as it also relates to what kind of flexibility we might want and need, how much free experimentation with features, mappings and destinations is needed, and also to whether we are making the tools for an end user or for the research personnel within the project. Oeyvind also mentions some arguments that directly oppose a modular structure, both in terms of the number of separate plugins and separate windows needed, and in terms of analyzing one signal in relation to activity in another (e.g. for cross-bleed reduction and/or masking features).

Stability

Josh asks about the stability issues reported: have any particular feature extractors, or other elements, been identified that trigger instabilities? Oeyvind and Victor discuss the Python interface, as this is one issue that frequently comes up in relation to compatibility and stability. There are things to try, but perhaps the most promising route is to try to get rid of the Python interface. Josh also asks about the preferred DAW used in the project, as this obviously influences stability. Oeyvind has good experience with Reaper, and this coincides with Josh’s experience at QMUL. In terms of stability and flexibility of routing (multichannel), Reaper is the best choice. Crossadaptive work directly in Ableton Live can be done, but always involves a hack. Other alternatives (Plogue Bidule, Bitwig…) are also discussed briefly. Victor suggests selecting a reference set of tools, which we document well in terms of how to use them in our project. Reaper has not been stable for Bernt and Trond, but this might be related to the setting of specific options (running plugins in separate/dedicated processes, and other performance options). In any case, the two DAWs of immediate practical interest are Reaper (in general) and Live (for some performers). An alternative to using a DAW to host the Analyzer might also be to create a standalone application, as a “server”, sending control signals to any host. There are good reasons for keeping it within the DAW, though, both for session management (saving setups) and for preprocessing of input signals (filtering, dynamics, routing).

Simplify

Some of the stability issues can be remedied by simplifying the analyzer, getting rid of unused features, and also getting rid of the Python interface. Simplification will also enable use by less trained users, as it enables self-education and the ability to just start using it and experimenting. Modularity might also enhance such self-education, but one take on “modularity” might simply be hiding irrelevant aspects of the GUI.
In terms of feature selection, the filtering of the GUI display (showing only a subset) is valuable. We also see that the number of actively used parameters is generally relatively low; our “polyphonic attention” for following independent modulations is generally limited to 3-4 dimensions.
It seems clear that we have some overshoot in terms of flexibility and number of parameters in the current version of our tools.

Performative

Marije also suggests we should investigate further what happens with repeated use: when the same musicians use the same setup several times over a period of time, working more intensively, just playing, seeing what combinations wear out and what stays interesting. This might guide us in the general selection of valuable features. We also touched briefly on the issue of using static mappings as opposed to changing the mapping on the fly within the short time span of one performance. Giving the system operator a more expressive role might also solve situations where a particular mapping wears out or becomes inhibiting over time. So far we have created very repeatable situations, to investigate in detail how each component works. Using a mapping that varies over time can enable more interesting musical forms, but will also in general make the situation more complex. Remembering how performers in general can respond positively to a certain “richness” of the interface (tolerating and even being inspired by noisy analysis), perhaps varying the mapping over time can also shift the attention more onto the sounding result and playing by ear holistically, rather than intellectually dissecting how each component contributes.
Concluding remarks also suggest that we still need to play more with it, to become more proficient, have more control, explore, and get used to (and tired of) how it works.

 

 

 

Audio effect: Liveconvolver3 (19 Sep 2016)

The convolution audio effect is traditionally used to sample a room to create artificial reverb. Others have used it extensively for creative purposes, for example convolving guitars with angle grinders and trains. The technology normally requires recording a sound, then analyzing it, and finally loading the analyzed impulse response (IR) into an effect to use it. Liveconvolver3 lets you live sample the impulse response and start convolving even before the recording is finished.

In the context of the crossadaptive project, convolution can be a nice way of imprinting the characteristics of one audio source on another. The live sampling of the IR is necessary to facilitate using it in an improvised manner, reacting immediately to what is played here and now.

There are some aesthetic challenges, namely how to avoid everything turning into a (somewhat beautiful) mush. This is because in convolution every sample of one sound is multiplied with every sample of the other sound. If we sample a long melodic line as the IR, a mere click of the tongue on the other audio channel will fire the whole melodic segment once. Several clicks will create separate echoes of the melody, and a continuous sound will create literally thousands of echoes. What is nice is that only frequencies that the two signals have in common will come out of the process. So a light whisper will create a high-frequency whispering melody (with the long IR described above), while a deep and resonant drone will just let those (spectral) parts of the IR through. Since the IR contains a recording not only of spectral content but also of its evolution over time, it can lend spectrotemporal morphing features from one sound to another. To reduce the mushiness of the processed sound, we can enhance the transients and reduce the sustained parts of the input sound. Even though this kind of (exaggerated) transient designer processing might sound artificial on its own, it can work well in the context of convolution. The current implementation, Liveconvolver3, does not include this kind of transient processing, but we have done this earlier so it will be easy to add.
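To illustrate the principle (offline, and much simplified compared to the partitioned convolution in the plugin), here is a short Python/numpy sketch of convolving two sounds by multiplying their spectra:

    import numpy as np

    def convolve(x, ir):
        n = len(x) + len(ir) - 1                              # length of the full convolution
        spectrum = np.fft.rfft(x, n) * np.fft.rfft(ir, n)     # only frequencies the two sounds share survive
        y = np.fft.irfft(spectrum, n)
        return y / (np.max(np.abs(y)) + 1e-12)                # crude output normalization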

There are also some technical challenges to using this technique in a live setting. These are related to amplitude control, and to the risk of feedback when playing on larger speaker systems. The feedback risk occurs because we are taking a spectral snapshot (the impulse response) of the room we are currently playing in (well, of an instrument in that room, but nevertheless, the room is there), and then we process sound coming from (another source in) the same room. The output of the process will enhance the frequencies that the two sources have in common, hence the characteristics of the room (and the speaker system) will be amplified, and this generally creates a risk of feedback. Once we have unwanted feedback with convolution, it will also generally take a while (a few seconds) to get rid of, since the nature of the process adds a reverb-like tail to every sound. To reduce the risk of feedback we use a very small frequency shift on the convolver output. This is not usually perceptible, but it disturbs the feedback chain sufficiently to significantly reduce the feedback potential.
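A sketch of the idea behind the frequency shift, here done as a single-sideband shift via the Hilbert transform (an assumption for illustration; the plugin may implement the shift differently):

    import numpy as np
    from scipy.signal import hilbert

    def freq_shift(x, shift_hz, sr):
        t = np.arange(len(x)) / sr
        analytic = hilbert(x)                                            # analytic signal of the convolver output
        return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))     # shift every partial by shift_hz

    # e.g. freq_shift(conv_out, 1.0, 44100) moves the whole output up by 1 Hz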

The challenge of overall amplitude control can be tackled by using the sum of all amplitudes in the IR as a normalization factor. This works reasonably well, and is how we do it in the liveconvolver. One obvious exception is the case where the IR and the input sound contain overlapping strong resonances (or single lone notes). Then we will get a lot of energy in those overlapping frequency regions, and very little else. We will work on algorithms to attempt normalization in these cases as well.
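In sketch form, the normalization described above amounts to scaling the IR by the inverse of its summed absolute amplitude (an illustration of the idea only, not the exact plugin code):

    def normalize_ir(ir):
        return ir / (np.sum(np.abs(ir)) + 1e-12)   # bounds the worst-case gain of the convolution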

The effect

Liveconvolver3 in an example setup in Reaper. Note the routing of the source signals to the two inputs of the effect (aux sends with pan).

The effect uses two separate audio inputs, one for the impulse response sampling, and one for the live input to be convolved. We have made it as a stereo effect, but do not expect it to convolve a stereo input. It also creates a mono output in the current implementation (the same signal on both stereo outputs). In the figure we see two input sources. Track 1 receives external audio and routes it to an aux send to the liveconvolver track, panned left so that it will enter only input 1 of the effect. Track 2 receives external audio and similarly routes it to an aux send to the liveconvolver track, but panned right so that the audio is only sent to input 2 of the effect.

The effect itself has controls for input level, highpass filtering (hpFreq), lowpass frequency (lpFreq) and output volume (convVolume). These controls basically do what their names say. Then we have controls to set the start time (IR_start) of the impulse response (allowing us to skip a certain number of seconds into the recording), and the impulse response length (IR_length), determining how many seconds of the IR recording we want to use. There are also controls for fading the IR in and out; without fading, we might experience clicks and pops in the output. The partition length sets the size of the partitioned convolution; higher settings require less CPU but also make the effect respond more slowly. Usually, just leave this at the default 2048. The big green button IR_record enables recording of an impulse response. The current maximum duration is 5.9 seconds at 44.1 kHz sampling rate. If the maximum duration is exceeded during recording, the recording simply stops and is treated as complete. The convolution process will keep running while recording, using parts of the newly recorded IR as they become available. The IR_release knob controls the amount of overlap between the new instances of convolution created during recording. When recording is done, we fall back to using just one instance again. The switch_inputs button lets us (surprise!) switch the two inputs, so that input 1 will be the IR record and input 2 will be the convolver input. If you want to convolve a source with itself, you would first record an IR, then switch the inputs so that the same source is convolved with its own (previously recorded) IR. Finally, to reduce the potential for audio feedback, the f_shift control can be adjusted. This shifts the entire output upwards by the amount selected; usually around 1 Hz is sufficient. Extreme settings will create artificial-sounding effects and cascading delays.

Installation

The effect is written in the audio programming language Csound, and compiled into a VST plugin using a tool called Cabbage. The actual program code is just a small text file (a csd) that you can download here.

You will need to download Cabbage (the bleeding edge version can be found here), then open the csd file in Cabbage and export it as a plugin effect. Put the exported plugin somewhere in your VST path so that your favourite DAW can find it. Then you’re all set.

Export as plugin effect in Cabbage

 

Routing in other hosts

As a short update, I just came to think that some users might find it complicated to translate that Reaper routing setup to other hosts. I know a lot of people are using Ableton Live, so here’s a screenshot of how to route for the liveconvolver in Live:

Example setup with the liveconvolver in Live

Note that

  • the aux sends are “post” (otherwise the sound would not go through the pan pot, and we need that).
  • Because the sends are post, the volume fader has to be up. We will probably not want to hear the direct unprocessed sound, so the “Audio To” selector on the channels is set to “Sends only”
  • Both input channels send to the same effect
  • The two input channels are panned hard left (ch 1) and hard right (ch 2)
  • The monitor selector for the channels is set to “in”, activating the input regardless of arm/recording

With all that set up, you can hit “IR_record” and record an IR (of the sound you have on channel 1). The convolver effect will be applied to the sound on channel 2.

 

Evolving Neural Networks for Cross-adaptive Audio Effects (27 Jun 2016)

I’m Iver Jordal and this is my first blog post here. I have studied music technology for approximately two years and computer science for almost five years. During the last six months I’ve been working on a specialization project which combines cross-adaptive audio effects and artificial intelligence methods. Øyvind Brandtsegg and Gunnar Tufte were my supervisors.

A significant part of the project has been about developing software that automatically finds interesting mappings (neural networks) from audio features to effect parameters. One thing that the software is capable of is making one sound similar to another sound by means of cross-adaptive audio effects. For example, it can process white noise so it sounds like a drum loop.

Drum loop (target sound):

White noise (input sound to be processed):

Since the software uses algorithms that are based on random processes to achieve its goal, the output varies from run to run. Here are three different output sounds:

These three sounds are basically white noise that has been processed by distortion and a low-pass filter. The effect parameters were controlled dynamically in a way that made the output sound like the drum loop (target sound).
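To give an idea of what “controlling effect parameters dynamically from audio features” means in practice, here is a deliberately simple Python sketch that maps one feature of a target sound (its frame-wise RMS level) to the cutoff of a low-pass filter applied to white noise. This only illustrates the general principle; the actual system uses evolved neural networks to map many features to several effect parameters, and the file names are hypothetical.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, lfilter

    sr, target = wavfile.read('target_drumloop.wav')     # hypothetical target sound
    target = target.astype(np.float64)
    if target.ndim > 1:
        target = target.mean(axis=1)                     # mix to mono
    target /= np.max(np.abs(target)) + 1e-12

    block = 1024
    nblocks = len(target) // block
    noise = np.random.uniform(-1, 1, nblocks * block)    # input sound to be processed
    out = np.zeros_like(noise)

    for i in range(nblocks):
        seg = target[i * block:(i + 1) * block]
        rms = np.sqrt(np.mean(seg ** 2))                 # feature: frame-wise RMS of the target
        cutoff = 200.0 + rms * 8000.0                    # map the feature to a filter cutoff in Hz
        b, a = butter(2, cutoff / (sr / 2), btype='low')
        out[i * block:(i + 1) * block] = lfilter(b, a, noise[i * block:(i + 1) * block])

    wavfile.write('noise_following_target.wav', sr, (out * 32767).astype(np.int16))

The filtering is done block by block for simplicity; a more careful implementation would smooth the parameter changes and carry filter state between blocks to avoid clicks.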

This software that I developed is open source, and can be obtained here:

https://github.com/iver56/cross-adaptive-audio

It includes an interactive tool that visualizes output data and lets you listen to the resulting sounds. It looks like this:

visualization-screenshot
For more details about the project and the inner workings of the software, check out the project report:

Evolving Artificial Neural Networks for Cross-adaptive Audio (PDF, 2.5 MB)

Abstract:

Cross-adaptive audio effects have many applications within music technology, including for automatic mixing and live music. The common methods of signal analysis capture the acoustical and mathematical features of the signal well, but struggle to capture the musical meaning. Together with the vast number of possible signal interactions, this makes manual exploration of signal mappings difficult and tedious. This project investigates Artificial Intelligence (AI) methods for finding useful signal interactions in cross-adaptive audio effects. A system for doing signal interaction experiments and evaluating their results has been implemented. Since the system produces lots of output data in various forms, a significant part of the project has been about developing an interactive visualization tool which makes it easier to evaluate results and understand what the system is doing. The overall goal of the system is to make one sound similar to another by applying audio effects. The parameters of the audio effects are controlled dynamically by the features of the other sound. The features are mapped to parameters by using evolved neural networks. NeuroEvolution of Augmenting Topologies (NEAT) is used for evolving neural networks that have the desired behavior. Several ways to measure fitness of a neural network have been developed and tested. Experiments show that a hybrid approach that combines local euclidean distance and Nondominated Sorting Genetic Algorithm II (NSGA-II) works well. In experiments with many features for neural input, Feature Selective NeuroEvolution of Augmenting Topologies (FS-NEAT) yields better results than NEAT.
