Presentations – Cross adaptive processing as musical intervention
Exploring radically new modes of musical interaction in live performance
http://crossadaptive.hf.ntnu.no

Various presentations and papers 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/10/11/various-presentations-and-papers-2018/
Thu, 11 Oct 2018

The crossadaptive project was presented at several conferences in 2018, and a number of written publications appeared.

At the NIME conference (New Interfaces for Musical Expression) at Virginia Tech, we presented the paper “Working Methods and Instrument Design for Cross-Adaptive Sessions” by Oeyvind Brandtsegg, Trond Engum & Bernt Isak Wærstad.
http://nime2018.icat.vt.edu/

We published a paper on the development of new convolution techniques in Applied Sciences: “Live Convolution with Time-Varying Filters”
by Øyvind Brandtsegg, Sigurd Saue and Victor Lazzarini.
https://www.mdpi.com/2076-3417/8/1/103

Brandtsegg presented the project at the Forum IRCAM Workshops in March 2018.
http://forumnet.ircam.fr/

Brandtsegg and Engum presented the project at the European Platform for Artistic Research in Music 2018, in Porto.
https://www.aec-music.eu/events/european-platform-for-artistic-research-in-music-2018

We published a paper in Frontiers in Digital Humanities: “Applications of Cross-Adaptive Audio Effects: Automatic Mixing, Live Performance and Everything in Between”
by Joshua D. Reiss and Øyvind Brandtsegg
https://www.frontiersin.org/articles/10.3389/fdigh.2018.00017/full

At the ICLI conference (International Conference on Live Interfaces) in Porto, we presented a paper: “Instrumentality, perception and listening in crossadaptive performance” by Marije Baalman, Simon Emmerson and Øyvind Brandtsegg
http://www.liveinterfaces.org/

Brandtsegg presented the project at the International Grieg Research School symposium “Knowing Music - Musical Knowing: Cross-disciplinary dialogue on epistemologies” in Trondheim, October 2018.
https://www.uib.no/en/rs/grieg/116427/knowing-music-musical-knowing-cross-disciplinary-dialogue-epistemologies

Concert presentation at the Artistic Research Forum in Bergen
http://crossadaptive.hf.ntnu.no/index.php/2018/10/11/concert-presentation-at-the-artistic-research-forum-in-bergen/
Thu, 11 Oct 2018

We presented the project during the Artistic Research Forum in Bergen on September 24th 2018. The presentation consisted of a short introduction to the concept, tools and methodologies, followed by a 20-minute musical performance showing several of the crossadaptive techniques in an improvised context. The performance was followed by a commentary by Diemo Schwarz and concluded with a plenary discussion.

Crossadaptive performance at ARF in Bergen. Left to right: Oeyvind Brandtsegg, Carl Haakon Waadeland, Trond Engum and Tone Åse. Photo by Stian Westerhus.

Concert at Dokkhuset, May 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/10/11/studio-and-rehearsals-before-final-concert/
Thu, 11 Oct 2018

The project held a concert at Dokkhuset on May 26th. This concert was made as a presentation of the artistic outcome towards the end of the project. Assuming there is no “final result” of an artistic process, it still represents an image of where we got to during this process. To give some insight into the different types of outcome, I decided to have three different ensembles, and also to include a presentation of a student production.

The program for the evening was:

Michael Duch – double bass
Oeyvind Brandtsegg – live convolver

Kim Henry Ortveit – electronics and percussion
Maja Ratkje – vocals, electronics
Oeyvind Brandtsegg – crossadaptive processing, Marimba Lumina

Ada Mathea Hoel – presentation of crossadaptive video and audio production

Trond Engum – crossadaptive processing, guitar
Carl Haakon Waadeland – percussion
Tone Åse – vocals, electronics
Oeyvind Brandtsegg – crossadaptive processing, Marimba Lumina

The whole event was recorded, and the video can be found here:
https://vimeo.com/292993129/589519efcb
Since this was a local concert, the verbal presentation is in Norwegian.

Crossadaptive seminar Trondheim, November 2017
http://crossadaptive.hf.ntnu.no/index.php/2017/11/05/crossadaptive-seminar-trondheim-november-2017/
Sun, 05 Nov 2017

As part of the ongoing research on crossadaptive processing and performance, we had a very productive seminar in Trondheim on November 2nd and 3rd, 2017. The current post shows the program of presentations, performances and discussions, and provides links to more detailed documentation of each session as it becomes available. Each part will be added as the documentation is ready, so if something is missing now, do check back later. The seminar was also streamed, and the recorded streams will be archived.

In addition to the researchers presenting, we also had an audience of students from the music technology and jazz departments, as well as other researchers and teachers from NTNU. We are grateful for the audience's input, which enriched our discussions.

Program:

Thursday, November 2nd

Practical experiments

Introduction and status. [slides]
Øyvind Brandtsegg

Performance
Maja S.K. Ratkje, Øyvind Brandtsegg, Miller Puckette (stand-in for Stian Westerhus)

Work methods and session reports. Experiences, insights, reflections.
Trond Engum (with team)[slides],  Bernt Isak Wærstad (with team) [slides]

Instruments and tools

Instrumental affordances, crossadaptivity as instrumental gesture.
Marije Baalman [slides]

Performance
Tone Åse, Carl Haakon Waadeland, Trond Engum

Instrument design and technological developments. [slides]
Sigurd Saue, Victor Lazzarini, Øyvind Brandtsegg

Friday, November 3rd

Reflection. Aesthetic and philosophical issues

Documentation methods [slides]
Andreas Bergsland


Performance
Bjørnar Habbestad, Gyrid N. Kaldestad, Bernt Isak Wærstad

What does it mean for the performer, for the audience, for the music? How does it change the game?
Solveig Bøe [notes], Simon Emmerson [slides]

Wider use and perspectives

Experiences with Philippe Manoury and Juliana Snapper, thoughts on instrumental control, and a performance
Miller Puckette [PD patches]
(with Øyvind Brandtsegg for a brief liveconvolver performance)

Looking at the music from the mix perspective. Viability of effects as expression. The wider field of automatic mixing and adaptive effects.
Gary Bromham [slides], Josh Reiss [slides]

Outcomes and evaluation. [slides]
Moderator: Øyvind Brandtsegg

Liveconvolver experiences, San Diego
http://crossadaptive.hf.ntnu.no/index.php/2017/06/07/liveconvolver-experiences-san-diego/
Wed, 07 Jun 2017

The liveconvolver has been used in several concerts and sessions in San Diego this spring. I played three concerts with the group Phantom Station (The Loft, Jan 30th, Feb 27th and Mar 27th), of which the first involved the liveconvolver. Then came a concert with the band Creatures (The Loft, April 11th), where the liveconvolver was used with contact mikes on the drums, live sampling the IR from the vocals. I also played a duo concert with Kjell Nordeson at Bread and Salt on April 13th, where the liveconvolver was used with contact mikes on the drums, live sampling the IR from my own vocals. Then came a duo concert with Kyle Motl at Coaxial in L.A. (April 21st), where a combination of crossadaptive modulation and live convolution was used. For the duo with Kyle, I switched between using bass and vocals as the IR source, letting the other instrument play through the convolver. A number of studio sessions were also conducted, with Kjell Nordeson, Kyle Motl, Jordan Morton, Miller Puckette, Mark Dresser, and Steven Leffue. A separate report on the studio session in UCSD Studio A will be published later.

“Phantom Station”, The Loft, SD

This group is based on Butch Morris’ conduction language for improvisation, and the performance typically requires a specific action (specific, although free and open) to happen on cue from the conductor. I was invited into this ensemble and encouraged to use whatever part of my instrumentarium I might see fit. Since I had just finished the liveconvolver plugin, I wanted to try it out. I also figured my live processing techniques would fit easily, in case the liveconvolver did not work so well. In practice, both the live processing and the live convolution instruments were less than optimal for this kind of ensemble playing. Even though the instrumental response can be fast (low latency), the way I normally use these instruments is not suited to making a musical statement for one second and then suddenly stopping again. This leads me to reflect on a quality measure I haven’t really thought of before. For lack of a better word, let’s call it reactive inertia: the ability to completely change direction on the basis of some external and unexpected signal. This is something other than the audio latency (of the audio processing), and also something other than the user interface latency (for example, the time it takes the performer to figure out which button to turn to achieve a desired effect). I think it has to do with the sound production process, for example how some effects take time to build up before they are heard as a distinct musical statement, and also with the inertia of interaction between humans and of the signal chain feeding the effects (say, if you live sample or live process someone, you first need to get a sample or an exciter signal). For live interaction instruments, the reactive inertia is thus governed by the time it takes two performers to react to the external stimuli, plus the time for their combined efforts to be turned into sound by the technology involved. Much like what an old man once told me at Ocean Beach:

“There’s two things that needs to be ready for you to catch a wave
– You, …and the wave”.

We can of course prepare for sudden shifts in the music, and construct instruments that are able to produce sudden shifts and fast reactions. Still, the reaction to a completely unexpected or unfamiliar stimulus will be slower than optimal. An acoustic instrument has fewer of these limitations. For this reason, I switched to using the Marimba Lumina for the remaining two concerts with Phantom Station, to be able to shape immediate short musical statements with more ease.

Phantom Station

“Creatures”, The Loft, SD

Creatures is the duo of Jordan Morton (double bass, vocals) and Kai Basanta (drums). I had the pleasure of sitting in with them, making it a trio for this concert at The Loft. Creatures have some composed material in the form of songs, and combine this with long improvised stretches. For this concert I got to explore the liveconvolver quite a bit, in addition to the regular live processing and Marimba Lumina. The convolver was used with input from piezo pickups on the drums, convolving with an IR live recorded from the vocals. Piezo pickups can be very “impulse-like”, especially when used on percussive instruments. The pickups’ response has a generous amount of high frequencies and a very high dynamic range. Due to the peaky, impulse-like nature of the signal, it drives the convolution almost like a sample playback trigger, creating delay patterns on the input sound. Still, the convolver output can become sustained and dense when there is high activity on the triggering input. In the live mix, the result sounds somewhat similar to infinite reverb or “freeze” effects (using a trigger to capture a timbral snippet and holding that sound as long as the trigger is enabled). Here, the capture would be the IR recording, and the trigger to create and sustain the output is the activity on the piezo pickup. The causality and performer interface are very different from those of a freeze effect, but listening from the outside, the result is similar. These expressive limitations can be circumvented by changing the miking technique, and by working in a more directed way with regard to what sound goes into the convolver. Due to the relatively few control parameters, the main thing deciding how the convolver sounds is the input signals. The term causality in this context was used by Miller Puckette when talking about the relationship between performative actions and instrumental reactions.
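As a rough illustration of this trigger-like behavior, here is a minimal offline sketch in Python. This is not the liveconvolver plugin itself (which runs in real time); all signals and parameters are invented for the example.

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 44100
t = np.arange(int(sr * 2.0)) / sr

# Stand-in for the piezo drum signal: sparse, peaky impulses.
rng = np.random.default_rng(0)
trigger = np.zeros_like(t)
trigger[(rng.uniform(size=8) * len(t)).astype(int)] = 1.0

# Stand-in for a live-sampled vocal IR: a short decaying harmonic snippet.
ir_t = np.arange(int(sr * 0.5)) / sr
ir = np.exp(-4.0 * ir_t) * np.sin(2 * np.pi * 220.0 * ir_t)

# Each impulse effectively "plays back" the IR, giving delay-like patterns;
# dense triggering smears these into a sustained, freeze-like texture.
out = fftconvolve(trigger, ir)
out /= np.max(np.abs(out)) + 1e-12  # normalize to avoid clipping
```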

Creatures + Brandtsegg
CreaturesTheLoft_mix1_mstr

Creatures at The Loft. A liveconvolver example can be found at 29:00 to 34:00 with Vocal IR, and briefly around 35:30 with IR from double bass.

“Nordeson/Brandtsegg duo”, Bread & Salt, SD

Duo configuration, where Kjell plays drums/percussion and vibraphone, and I did live convolution, live processing and Marimba Lumina. My techniques were much like those I used with Creatures. The live convolver setup was also similar, with the IR being live sampled from my vocals and the convolver being triggered by piezo pickups on Kjell’s drums. I had the opportunity to work over a longer period of time preparing for this concert together with Kjell. Because of this, we managed to develop a somewhat more nuanced utilization of the convolver techniques. Still, in the live performance situation on a PA, the technical circumstances made it a bit more difficult to utilize the fine-grained control over the process, and I felt the sounding result was similar in function to what I did together with Creatures. It works well like this, but there is potential for getting a lot more variation out of this technique.

Nordeson and Brandtsegg setup at Bread and Salt

We used a quadraphonic PA setup for this concert. Due to an error with the front-of-house patching, only 2 of the 4 lines from my electronics were recorded, so the mix is somewhat off balance. The recording also lacks the first part of the concert, starting some 25 minutes into it.

NordesonBrandtsegg_mix1_mstr

“The Gringo and the Desert”, Coaxial, LA

In this duo Kyle Motl plays double bass and I do vocals, live convolution, live processing, and also crossadaptive processing. I did not use the Marimba Lumina in this setting, so more focus was allowed for the processing. In terms of crossadaptive processing, the utilization of the techniques is a bit more developed in this configuration. We’ve had the opportunity to work over several months, with dedicated rehearsal sessions focusing on separate aspects of the techniques we wanted to explore. As it happened during the concert, we played one long set, and the different techniques were enabled as needed. Parameters that were manually controlled in parts of the set were delegated to crossadaptive modulations in other parts of the set. The live convolver was used freely as one out of several active live processing modules/voices. The liveconvolver with vocal IR can be heard for example from 16:25 to 20:10. Here, the IR is recorded from vocals, and the process acts as a vocal “shade” or “pad”, creating long sustained sheets of vocal sound triggered by the double bass. Then comes liveconvolver with bass IR from 20:10 to 23:15, where we switch on full crossadaptive modulation until the end of the set. We used a complex mapping designed to respond to a variety of expressive changes. Our attitude as performers was not to intellectually focus on controlling specific dimensions, but to allow the adaptive processing to naturally follow whatever happened in the music.

Gringo and the Desert soundcheck at Coaxial, L.A

coaxial_Kyle_Oeyvind_mix2_mstr

Gringo and the Desert at Coaxial DTLA, …yes, the background noise is the crickets outside.

Session with Steven Leffue (Apr 28th, May 5th)

I did two rehearsal sessions together with Steven Leffue in April, as preparation for the UCSD Studio A session in May. We worked both on crossadaptive modulations and on live convolution. Especially interesting with Steven is his own use of adaptive and crossadaptive techniques. He has developed a setup in PD, where he tracks transient density and amplitude envelope over different time windows, and also uses the standard deviation of transient density within these windows. The windowing and statistics he uses can act somewhat like a feature we have also discussed in our crossadaptive project: a method of making an analysis “in relation to the normal level” for a given feature, and thus a way to track relative change. Steven’s Master’s thesis “Musical Considerations in Interactive Design for Performance” relates to this and other issues of adaptive live performance. Notable is also his ICMC paper “AIIS: An Intelligent Improvisational System”. His music can be heard at http://www.stevenleffue.com/music.html, where the adaptive electronics are featured in “A theory of harmony” and “Futures”.
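The “relative to the normal level” idea can be sketched roughly like this: compute a short-term feature (here, transient density from onset times) and express its latest value as a deviation from the running history. The window sizes, the z-score formulation and the toy data are my own assumptions for illustration, not Steven’s PD implementation.

```python
import numpy as np

def transient_density(onset_times, t, window=2.0):
    """Onsets per second within the last `window` seconds before time t."""
    recent = [o for o in onset_times if t - window <= o <= t]
    return len(recent) / window

def relative_change(values):
    """Z-score of the latest value against the running history."""
    values = np.asarray(values, dtype=float)
    if len(values) < 2:
        return 0.0
    mean, std = values[:-1].mean(), values[:-1].std()
    return (values[-1] - mean) / (std + 1e-9)

# Example: a player gradually speeding up, then a sudden burst of onsets.
onsets = [0.0, 0.9, 1.7, 2.4, 3.0, 3.5, 3.9, 4.0, 4.1, 4.2]
history = [transient_density(onsets, t) for t in np.arange(1.0, 4.5, 0.5)]
print(relative_change(history))  # a large positive value flags the burst
```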
Our first session was mainly devoted to testing and calibrating the analysis methods for use on the saxophone. In very broad terms, we notice that the different analysis streams now seem to work relatively similarly on different instruments. The main differences relate to extraction of tone/noise balance, general brightness, timbral “pressedness” (weight of formants), and to some extent transient detection and pitch tracking. The reason why the analysis methods now appear more robust is partly due to refinements in their implementation, and partly due to (more) experience in using them as modulators. Listening, experimentation, tweaking, and plainly just a lot of spending-time-with-them have made for a more intuitive understanding of how each analysis dimension relates to an audio signal.
The second session was spent exploring live convolution between sax and vocals. Of particular interest here are the comments from Steven regarding the performative roles of recording the IR vs playing through the convolver. Steven states quite strongly that the one recording the IR has the most influence over the resulting music. This impression is consistent both when he records the IR (and I sing through it), and when I record the IR and he plays through it. This may be caused by several things, but of special interest is that it is diametrically opposed to what many other performers have stated. Kyle, Jordan and Kjell all, in our initial sessions, voiced a higher performative intimacy, a closer connection to the output, when playing through the IR. Maybe Steven is more concerned with the resulting timbre (including the processed sound) than with the physical control mechanism, as he routinely designs and performs with his own interactive electronics rig. Of course all musicians care about the sound, but perhaps there is a difference of approach on just how to get there. With the liveconvolver we put the performers in an unfamiliar situation, and the differences in approach might just show different methods of problem solving to gain control over this situation. What I’m trying to investigate is how the liveconvolver technique works performatively, and here the performer’s personal and musical traits play into the situation quite strongly. Again, we can only observe single occurrences and try to extract things that might work well. There are no conclusions to be drawn on a general basis as to what works and what does not, and neither can we conclude what is the nature of this situation and this tool. One way of looking at it (I’m still just guessing) is that Steven treats the convolver as *the environment* in which music can be made. The changes to the environment determine what can be played and how it will sound; thus, the one recording the IR controls the environment and thereby controls the most important factor in determining the music.
In this session, we also experimented a bit with transposed and reversed IRs, these being some of the parametric modifications we can make to the IR with our liveconvolver technique. Transposing can be interesting, but also quite difficult to use musically. Transposing in octave intervals can work nicely, as it acts just as much as a timbral colouring without changing pitch class. A fun fact about reversed IRs as used by Steven: If he played in the style of Charlie Parker and we reversed the IR, it would sound like Evan Parker. Then, if he played like Evan Parker and we reversed the IR, it would still sound like Evan Parker. One could say this puts Evan Parker at the top of the convolution-evolutionary-saxophone tree….
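For the curious, the two IR manipulations can be sketched offline like this. These are naive versions for illustration, not the plugin’s actual real-time implementation:

```python
import numpy as np
from scipy.signal import resample

def reverse_ir(ir):
    """Time-reverse the impulse response."""
    return ir[::-1].copy()

def transpose_ir(ir, semitones):
    """Transpose by naive resampling: playing the resampled IR back at the
    original rate shifts pitch by 2**(semitones/12). At +12 the pitch class
    is kept (octave colouring), at the cost of halving the IR duration."""
    factor = 2.0 ** (semitones / 12.0)
    return resample(ir, max(1, int(len(ir) / factor)))
```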

Steven Leffue

2017_05_StevenOyvLiveconv_VocIR_mix3_mstr

Liveconvolver experiment Sax/Vocals, IR recorded by vocals.

2017_05_StevenOyvLiveconv_SaxIR_mix3_mstr

Liveconvolver experiment Sax/Vocals, IR recorded by Sax.

2017_05_StevenOyvLiveconv_reverseSaxIR_mix3_mstr

Liveconvolver experiment Sax/Vocals, time reversed IR recorded by Sax.

Session with Miller Puckette, May 8th

The session was intended as a “calibration run”, to see how the analysis methods responded to Miller’s guitar, as preparation for the upcoming bigger session in UCSD Studio A. The main objective was to determine which analysis features would work best as expressive dimensions, find the appropriate ranges, and start looking at potentially useful mappings. After this, we went on to explore the liveconvolver with vocals and guitar as the input signals. Due to the “calibration run” mode of approach, the session was not videotaped. Our comments and discussion were only dimly picked up by the microphones used for processing. Here’s a transcription of some of Miller’s initial comments on playing with the convolver:

“It is interesting, that …you can control aspects of it but never really control the thing. The person who’s doing the recording is a little bit less on the hook. Because there’s always more of a delay between when you make something and when you hear it coming out [when recording the IR]. The person who is triggering the result is really much more exposed, because that person is in control of the timing. Even though the other person is of course in control of the sonic material and the interior rhythms that happen.”

Since the liveconvolver has been developed and investigated as part of the research on crossadaptive techniques, I had slipped into the habit of calling it a crossadaptive technique. In discussion with Miller, he pointed out that the liveconvolver is not really *crossadaptive* as such. But it involves some of the same performative challenges, namely playing something that is not played solely for the purpose of its own musical value. The performers will sometimes need to play something that will affect the sound of the other musician in some way. One of the challenges is how to incorporate that thing into the musical narrative, taking care of how it sounds in itself and exactly how it will affect the other performer’s sound. Playing with the liveconvolver poses this performative challenge, as does regular crossadaptive modulation. One thing the live convolver does not have is the reciprocal/two-way modulation; it is more of a one-way process. The recent Oslo session on liveconvolution used two liveconvolvers simultaneously to re-introduce the two-way reciprocal dependency.
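To make the one-way vs two-way distinction concrete, here is a schematic offline sketch (function names and the noise stand-ins are mine, not project code): one convolver makes A’s output depend on B, and running a second convolver with the roles swapped re-introduces the mutual dependency.

```python
import numpy as np
from scipy.signal import fftconvolve

def capture_ir(signal, sr, start, length=0.5):
    """Grab a short IR segment from a performer's signal."""
    i = int(start * sr)
    return signal[i:i + int(length * sr)]

def liveconvolve(src, ir):
    """Convolve one performer's signal with an IR sampled from the other."""
    out = fftconvolve(src, ir)[:len(src)]
    return out / (np.max(np.abs(out)) + 1e-12)

sr = 44100
rng = np.random.default_rng(1)
a = rng.standard_normal(2 * sr)  # stand-in for guitar
b = rng.standard_normal(2 * sr)  # stand-in for vocals

out_a = liveconvolve(a, capture_ir(b, sr, 0.2))  # one-way: A through B's IR
out_b = liveconvolve(b, capture_ir(a, sr, 0.7))  # adding this makes it two-way
```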

Miller Puckette

2017_05_liveconv_OyvMiller1_M_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by guitar.

2017_05_liveconv_OyvMiller2_M_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by guitar.

2017_05_liveconv_OyvMiller3_M_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by guitar.

2017_05_liveconv_OyvMiller4_O_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by vocals.

2017_05_liveconv_OyvMiller5_O_IR

Liveconvolver experiment Guitar/Vocals, IR recorded by vocals.

Concerts and presentations, fall 2016
http://crossadaptive.hf.ntnu.no/index.php/2016/12/15/concerts-and-presentations-fall-2016/
Thu, 15 Dec 2016

A number of concerts, presentations and workshops were given during October and November 2016. We could call it the 2016 Crossadaptive Transatlantic tour, but we won’t. This post gives a brief overview.

Concerts in Trondheim and Göteborg

BRAK/RUG was scheduled for a concert (with a preceding lecture/presentation) at Rockheim, Trondheim on October 21st. Unfortunately, our drummer Siv became ill and could not play. At 5 in the afternoon (concert start at 7) we called Ola Djupvik to ask if he could sit in with us. Ola has experience from playing in a musical setting with live processing and crossadaptive processing, for example the session on September 20th and 21st, and also from performing with music technology students Ada Mathea Hoel, Øystein Marker and others. We were very happy and grateful for his courage to step in on such short notice. Here’s an excerpt from the presentation that night, showing vocal pitch controlling reverb on the drums (high pitch means smaller reverb size), and transient density on the drums controlling delay feedback on the vocals (faster playing means less feedback).
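The mapping logic of this demo can be sketched as two simple inverse scalings. The sketch assumes analysis values already normalized to 0..1, which glosses over the actual analyzer stage; all names and ranges are illustrative.

```python
def scale(x, lo, hi, invert=False):
    """Map a normalized analysis value (0..1) to a parameter range."""
    x = min(max(x, 0.0), 1.0)
    if invert:
        x = 1.0 - x
    return lo + x * (hi - lo)

# Hypothetical analysis frame (values an analyzer would produce).
vocal_pitch_norm = 0.8
drum_transient_density_norm = 0.6

# High vocal pitch -> smaller drum reverb; faster playing -> less delay feedback.
drum_reverb_size = scale(vocal_pitch_norm, 0.2, 0.9, invert=True)
vocal_delay_feedback = scale(drum_transient_density_norm, 0.0, 0.7, invert=True)
```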

There is a significant amount of crossbleed between vocals and drums, so the crossadaptivity is quite flaky. We still have some work to do on source separation to make this work well when playing live with a PA system.

Crossadaptive demo at Rockheim
mix_maja_ola_cross_rockheim_ptmstr

Thanks to Tor Breivik for recording the Rockheim event. The clip here shows only the crossadaptive demonstration. The full concert is available on Soundcloud

Brandtsegg, Ratkje, Djupvik trio at Rockheim

The day after the Trondheim concert, we played at the Göteborg Art Sounds festival. Now, Siv was feeling better and was able to play. Very nice venue at Stora Teatern. This show was not recorded.

And then we take… the US

The crossadaptive project was presented at the Transatlantic Forum in Chicago on October 24, in a special session titled “Sensational Design: Space, Media, and the Senses”. Sigurd Saue, Trond Engum and myself (Øyvind Brandtsegg) all took part in the presentation, showing the many-faceted aspects of our work. Being a team of three people also helped the networking effort that is naturally a part of such a forum. During our stay in Chicago, we also visited the School of the Art Institute of Chicago, meeting Nicolas Collins, Shawn Decker, Lou Mallozzi, and Bob Snyder to start working on exchange programs for both students and faculty. Later in the week, Brandtsegg presented the crossadaptive project during a SAIC class on audio projects.

Sigurd Saue and Bob Snyder at SAIC

After Chicago, Engum and Saue went back to Trondheim, while I traveled on to San Francisco, Los Angeles, Santa Barbara, and finally San Diego.
In the Bay Area, after jamming with Joel Davel in Paul Dresher’s studio and playing a concert with Matt Ingalls and Ken Ueno at Tom’s Place, I presented the crossadaptive project at CCRMA, Stanford University on November 2. The presentation seemed well received and spurred a long discussion where we touched on the use of MFCCs, ratios and critical bands, stabilizing the peaks of rhythmic autocorrelation, the difference of the correlation between two inputs (to get to the details of each signal), and more. Getting the opportunity to discuss audio analysis with this crowd was a treat. I also got the opportunity to go back the day after to look at student projects, which I find gives a nice feel for the vibe of the institution. There is a video of the presentation here

After Stanford, I also did a presentation at the beautiful CNMAT at UC Berkeley, with Ed Campion, Rama Gottfried, and a group of enthusiastic students. There I also met colleague P.A. Nilsson from Göteborg, as he was on a residency there. P.A.’s current focus on technology to intervene in and structure improvisations is closely related to some of the implications of our project.

CNMAT, UC Berkeley

On November 7 and 8, I did workshops at California Institute of the Arts, invited by Amy Knoles. In addition to presenting the technologies involved, we did practical studies where the students played in processed settings and experienced the musical potential and also the different considerations involved in this kind of performance.

Calarts workshops

Clint Dodson and Øyvind Brandtsegg experimenting together at CalArts

At UC Santa Barbara, I did a presentation in Studio Xenakis on November 9. There, I met with Curtis Roads, Andres Cabrera, and a broad range of their colleagues and students. With regard to listening to crossadaptive performances, Curtis Roads made the precise observation that it is relatively easy to follow if one knows the mappings, but it could be hard to decode the mapping just by listening to the results. In Santa Barbara I also got to meet Owen Campbell, who did a master’s thesis on crossadaptive processing, and I got insight into his research and software solutions. His work on ADEPT was also presented at the AES workshop on intelligent music production at Queen Mary University this September, where Owen also met our student Iver Jordal, who was presenting his research on artificial intelligence in crossadaptive processing.

San Diego

Back in San Diego, I did a combined presentation and concert for the computer music forum on November 17.  I had the pleasure of playing together with Kyle Motl on double bass for this performance.

Kyle Motl and Øyvind Brandtsegg, UC San Diego

We demonstrated both live processing and crossadaptive processing between voice and bass. There was a rich discussion with the audience. We touched on issues of learning (parameter by parameter, or learning a combined and complex parameter set as one would on an acoustic instrument), etudes, inverted mappings sometimes being more musically intuitive, how this can make musicians pay more attention to each other than to themselves (frustrating or liberating?), and tuning of the range and shape of parameter mappings (which still seems a bit on/off sometimes, with relatively low resolution in the middle range).

First we did an example of a simple mapping:
Vocal amplitude reduces reverb size for the bass,
Bass amplitude reduces delay feedback on the vocals.

Kyle and Oeyvind Simple
mix_sd_nov_17_3_cross_ptmstr

Then a more complex example (a code sketch of this mapping follows after the list):
Vocal transient density -> Bass lowpass filter frequency
Vocal pitch -> Bass delay filter frequency
Vocal percussive -> Bass delay feedback
Bass transient density -> Vocal reverb size (less)
Bass pitch+centroid -> Vocal tremolo speed
Bass noisiness -> Vocal tremolo grain size (less)
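
The complex mapping above can be written as a small modulation matrix. The feature names, parameter ranges and the shape exponent here are illustrative assumptions rather than the values used in the performance; the exponent is one simple way to gain more resolution in the middle range, as discussed above.

```python
def apply_mapping(features, mapping):
    """Turn normalized analysis features (0..1) into effect parameters."""
    params = {}
    for (feature, param), (lo, hi, invert, shape) in mapping.items():
        x = min(max(features[feature], 0.0), 1.0) ** shape  # shape the curve
        if invert:
            x = 1.0 - x  # "(less)" mappings: more input means less effect
        params[param] = lo + x * (hi - lo)
    return params

mapping = {
    ("voc_transient_density", "bass_lpf_freq"):   (200.0, 8000.0, False, 0.5),
    ("voc_pitch",             "bass_delay_freq"): (100.0, 4000.0, False, 1.0),
    ("voc_percussive",        "bass_delay_fb"):   (0.0,   0.8,    False, 1.0),
    ("bass_transient_density","voc_reverb_size"): (0.1,   0.9,    True,  1.0),
    ("bass_pitch_centroid",   "voc_trem_speed"):  (0.5,   12.0,   False, 1.0),
    ("bass_noisiness",        "voc_grain_size"):  (0.01,  0.2,    True,  1.0),
}

features = {feat: 0.5 for feat, _ in mapping}  # hypothetical analysis frame
print(apply_mapping(features, mapping))
```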

K&O complex mapping
mix_sd_nov_17_4_cross_ptmstr

We also demonstrated another, more direct kind of crossadaptive processing: convolution with a live sampled impulse response. Oeyvind manually controlled the IR live sampling of sections from Kyle’s playing, and also triggered the convolver by tapping and scratching on a small wooden box with a piezo microphone. The wooden box source is not heard directly in the recording, but the resulting convolution is. No other processing is done, just the convolution process.

K&O, convolution
mix_sd_nov_17_2b_conv_ptmstr

We also played a longer track of regular live processing this evening. This track is available on Soundcloud

Thanks to UCSD and recording engineers Kevin Dibella and James Forest Reid for recording the Nov 17 event.
