Cross adaptive processing as musical intervention – Exploring radically new modes of musical interaction in live performance
http://crossadaptive.hf.ntnu.no

Documentation and the speed of an artistic workflow
Thu, 11 Oct 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/10/11/documentation-and-the-speed-of-an-artistic-workflow/

During our preparations for the concert at Dokkhuset in May 2018, we had several sessions of combined rehearsal and studio recording. We planned to record the concert itself for a public release, but wanted to record the preparations too, in case we needed additional (or backup) material. We ended up using a combination of the live and studio recordings for the upcoming release, so in that respect the strategy worked as planned.

Since we already had substantial documentation of practical experiments from earlier sessions, we decided not to document these sessions with video and written reflections. One could of course say that the sessions are still documented through the audio recordings, but I would characterize these modes of documentation as so different that they require different treatment, and they impact the production and the reflection in very different ways. We as musicians are quite used to working in the studio, used to being recorded, and familiar with the scrutiny that follows from this process. Close miking in an optimal recording environment brings out details that you would not necessarily notice otherwise. The impact of the difference in documentation methods and documentation purpose is the topic of this blog post.

4-camera video documentation

During the practical sessions earlier in this project, we had done what I would call a full video documentation. This consisted of a written session report form (with objectives, listing of participants and takes, reflections during and after the session), a 4-camera video recording of the full session, with reflection interviews after the session, a video digest with highlights of the raw video takes, and a blog post summing up and presenting the results. This recipe was developed as a collaboration between Andreas Bergsland and myself, with Bergsland leading the interviews and taking care of all parts of the video production. A documentation software tool (docmarker) was also developed by me, to aid in marking significant moments in time during sessions, with the purpose of speeding up the process of sifting through the material when editing.

Docmarker tool
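To illustrate the basic principle of such a marker tool (a hypothetical Python sketch, not the actual docmarker implementation), the core of it is simply logging labeled timestamps relative to the start of the session, which can later be matched against the video timeline when editing:

```python
import time

class SessionMarker:
    """Minimal sketch of a session-marking tool: logs labeled
    timestamps relative to the start of a recording session."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._start = None
        self.markers = []  # list of (seconds_from_start, label)

    def start_session(self):
        self._start = self._clock()

    def mark(self, label=""):
        if self._start is None:
            raise RuntimeError("session not started")
        self.markers.append((self._clock() - self._start, label))

    def report(self):
        # Format markers as mm:ss lines for locating highlights later.
        lines = []
        for t, label in self.markers:
            m, s = divmod(int(t), 60)
            lines.append(f"{m:02d}:{s:02d} {label}")
        return "\n".join(lines)
```

In use, `mark()` would be bound to a single key press so that marking a significant moment does not interrupt the session; the report then lets the editor jump directly to those moments in the raw multi-camera material.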

Such an extensive documentation format of course requires substantial resources, but care was taken to ensure that the artistic process would not be slowed down by documentation work. We had dedicated personnel doing the documentation; for most sessions Bergsland would transparently take care of the video production. The reflective interviews were done after all performing activities were completed, and we also had considerable technical assistance from studio engineers (led by Thomas Henriksen). All this should ensure that the burden of documentation would not impose any hindrance to the workflow of the artistic process being documented.

Still, when we did these final rehearsals in May 2018 without this kind of documentation, I realized that the process and the workflow were much faster. It seems to me that the act of being documented influences the flow of the process in a very significant manner. I relate this to a difference between a pure artistic process and an artistic research process. I do not propose that what I say here has any form of general validity, but I do recognize it within my own experience from this project and from earlier artistic research projects. It appears to be related to being aware of one’s own intentions on an intellectual level, as opposed to “just doing”. During my doctoral research project “New creative possibilities through improvisational use of compositional techniques, – a new computer instrument for the performing musician”, finished in 2008, I also noted this difficulty of doing research on my own process. Being inside while looking from the outside. The intuitive and the analytic. The subjective and the attempt at being objective. During an earlier session in the crossadaptive project, Bergsland asked me right after a take what I had done during that take. I replied that I did exactly what I had said I would (before the take). He then noted that it did not sound as if this was what happened, and I needed to check my settings and realized that, yes, I had actually done some adjustments early in the take “just to make it work”.

After this insight from the May sessions, I have discussed the issue with several colleagues. Many agree with me that it would be interesting to try to document the moment when an artistic process ignites, the moment when “it happens”. Many also recognize the inevitable slowdown due to an analytical component being introduced into the inner loop of the process. Others would say that I must be doing something wrong if the act of documentation slows me down so much. The way I experience it, the artistic process has several layers, each with its own iteration time. Parts of the process iterate on the time scale of years, where reflective insight and slow learning influence the growth of ideas. Other intermediary layers also exist, like onion skins, before we get to the inner loop that is active during performance. Here, the millisecond interactions between impulses and reactions are at their most sensitive to interruptions and distractions. Any kind of small drag at this level impedes the flow and slows down the loop, sometimes to the extent that it will not work; it stops. As if it were a rotating wheel with a self-preserving spin, delicately balanced between friction and flow. On the longer time scales, documentation does not impede the process, but rather can enrich it. On those time scales I also recognize the accumulation of material, of background research, technical solutions, philosophical perspectives and so on. It may seem like the gunpowder that allows something to ignite is collected during those longer loops, while the spark… well, that is a fickle and short-lived entity, more vulnerable to analytical impact. In quantum physics there are also (as far as I know) phenomena too small to be observed directly, where the act of observation collapses a potentiality. Maybe there is a similar uncertainty principle in artistic research?

 

Various presentations and papers 2018
Thu, 11 Oct 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/10/11/various-presentations-and-papers-2018/

The crossadaptive project was presented at several conferences in 2018, and several written publications were made.

At the NIME conference (New Instruments for Musical Expression) at Virginia Tech, we presented a paper: “Working Methods and Instrument Design for Cross-Adaptive Sessions” by Oeyvind Brandtsegg, Trond Engum & Bernt Isak Wærstad
http://nime2018.icat.vt.edu/

We published a paper on the development of new convolution techniques in Applied Sciences: “Live Convolution with Time-Varying Filters”
by Øyvind Brandtsegg, Sigurd Saue and Victor Lazzarini.
https://www.mdpi.com/2076-3417/8/1/103

Brandtsegg presented the project at the Forum IRCAM Workshops in March 2018.
http://forumnet.ircam.fr/

Brandtsegg and Engum presented the project at the European Platform for Artistic research in Music 2018, in Porto.
https://www.aec-music.eu/events/european-platform-for-artistic-research-in-music-2018

We published a paper in Frontiers in Digital Humanities: “Applications of Cross-Adaptive Audio Effects: Automatic Mixing, Live Performance and Everything in Between”
by Joshua D. Reiss and Øyvind Brandtsegg
https://www.frontiersin.org/articles/10.3389/fdigh.2018.00017/full

At the ICLI conference (International Conference on Live Interfaces) in Porto, we presented a paper: “Instrumentality, perception and listening in crossadaptive performance” by Marije Baalman, Simon Emmerson and Øyvind Brandtsegg
http://www.liveinterfaces.org/

Brandtsegg presented the project at the International Grieg Research School symposium “Knowing Music – Musical Knowing: Cross disciplinary dialogue on epistemologies” in Trondheim, October 2018.
https://www.uib.no/en/rs/grieg/116427/knowing-music-musical-knowing-cross-disciplinary-dialogue-epistemologies

Master thesis at UCSD (Jordan Morton)
Thu, 11 Oct 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/10/11/master-thesis-at-ucsd-jordan-morton/

Jordan Morton writes about her experiences within the crossadaptive project in her master thesis at the University of California San Diego. The title of the thesis is “Composing A Creative Practice: Collaborative Methodologies and Sonic Self-Inquiry In The Expansion Of Form Through Song”. Jordan was one of the performers who took part in several sessions, experiments and productions. Her master thesis gives a valuable report of how she experienced the collaboration, and how it contributed to some new directions of inquiry within the practical part of her master’s degree.

Full text available here:
https://escholarship.org/uc/item/0641v1r0

Concert presentation at the Artistic Research Forum in Bergen
Thu, 11 Oct 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/10/11/concert-presentation-at-the-artistic-research-forum-in-bergen/

We presented the project during the Artistic Research Forum in Bergen on September 24th, 2018. The presentation consisted of a short introduction to the concept, tools and methodologies, followed by a 20-minute musical performance showing several of the crossadaptive techniques in an improvised context. The performance was followed by a commentary from Diemo Schwarz and concluded with a plenary discussion.

Crossadaptive performance at ARF in Bergen. Left to right: Oeyvind Brandtsegg, Carl Haakon Waadeland, Trond Engum and Tone Åse. Photo by Stian Westerhus.

 

 

Concert at Dokkhuset, May 2018
Thu, 11 Oct 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/10/11/studio-and-rehearsals-before-final-concert/

The project held a concert at Dokkhuset on May 26th. This concert was made as a presentation of the artistic outcome towards the end of the project. There is no “final result” of an artistic process, but the concert still represents an image of where we got to during this process. To give some insight into different types of outcome, I decided to have three different ensembles, and also to include one presentation of a student production.

The program for the evening was:

Michael Duch – double bass
Oeyvind Brandtsegg – live convolver

Kim Henry Ortveit – electronics and percussion
Maja Ratkje – vocals, electronics
Oeyvind Brandtsegg – crossadaptive processing, Marimba Lumina

Ada Mathea Hoel – presentation of crossadaptive video and audio production

Trond Engum – crossadaptive processing, guitar
Carl Haakon Waadeland – percussion
Tone Åse – vocals, electronics
Oeyvind Brandtsegg – crossadaptive processing, Marimba Lumina

The whole event was recorded, and the video can be found here:
https://vimeo.com/292993129/589519efcb
Since this was a local concert, the verbal presentation is in Norwegian.

Session with Kim Henry Ortveit
Thu, 22 Feb 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/02/22/session-with-kim-henry-ortveit/

Kim Henry is currently a master student at NTNU music technology, and as part of his master project he has designed a new hybrid instrument. The instrument allows close interactions between what is played on a keyboard (or rather a Seaboard) and a set of drum pads. Modulations and patterns played on one part of the instrument determine how other components of the instrument actually sound. This is combined with some intricate layering, looping and quantization techniques that allow shaping of the musical continuum in novel ways. Since the instrument is in itself crossadaptive between its constituent parts (and simply also because we think it sounds great), we wanted to experiment with it within the crossadaptive project too.


Kim Henry’s instrument

The session was done as an interplay between Kim Henry on his instrument and Øyvind Brandtsegg on vocals and crossadaptive processing. It was conducted as an initial exploration of just “seeing what would happen” and trying out various ways of making crossadaptive connections here and there. The two audio examples here are excerpts of longer takes.

Take 1, Kim Henry Ortveit and Øyvind Brandtsegg

 

2018_02_Kim_take2

Take 2, Kim Henry Ortveit and Øyvind Brandtsegg

 

Session with Michael Duch
Thu, 22 Feb 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/02/22/session-with-michael-duch/

On February 12th, we did a session with Michael Duch on double bass, exploring auto-adaptive use of our techniques. We were interested in seeing how the crossadaptive techniques could be used for personal timbral expansion by a single player. This is a step back in complexity from the crossadaptive interplay, but it is interesting for two reasons. One is to check how useful our techniques of modulation are in a setting with more traditional performer control: where there is only one performer modulating himself, there is a closer relationship between performer intention and timbral result. Two, the reason to do this specifically with Michael is that we know from his work with Lemur and other settings that he intently and intimately relates to the performance environment, the resonances of the room and the general ambience. Due to this focus, we also wanted to use live convolution techniques where he first records an impulse response and then plays through the same filter himself. This exposed a feature needed in the live convolver: one might want to delay the activation of the new impulse response until its recording is complete (otherwise we will most certainly get extreme resonances while recording, since the filter and the excitation are very similar). That technical issue aside, it was musically very interesting to hear how he explored resonances in his own instrument, and also used small amounts of detuning to approach beating effects in the relation between the filter and the excitation signal.
The self-convolution also exposes parts of the instrument spectrum that usually are not so noticeable, like bassy components of high notes, or prominent harmonics that would otherwise be perceptually masked by merging into the full tone of the instrument.
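The deferred-activation idea mentioned above can be illustrated with a small sketch (hypothetical names, not the actual live convolver code): the new impulse response is recorded into a pending buffer, and only swapped in as the active filter once its recording is complete, so the filter cannot be excited by the very signal that is still being recorded into it.

```python
class DeferredIRConvolver:
    """Sketch of a live convolver that records a new impulse response (IR)
    while still filtering with the previous one, swapping only when the
    new recording is complete. Naive direct-form convolution, for clarity."""

    def __init__(self, ir_length):
        self.ir_length = ir_length
        self.active_ir = [0.0] * ir_length   # filter currently in use
        self.pending_ir = []                 # IR being recorded
        self.history = [0.0] * ir_length     # recent input samples

    def record_ir_sample(self, sample):
        # Record into the pending buffer; activate only when full.
        self.pending_ir.append(sample)
        if len(self.pending_ir) == self.ir_length:
            self.active_ir = self.pending_ir
            self.pending_ir = []

    def process(self, sample):
        # Convolve the input with the *active* IR only, so a half-recorded
        # IR never filters the signal it is being recorded from.
        self.history.insert(0, sample)
        self.history.pop()
        return sum(h * c for h, c in zip(self.history, self.active_ir))
```

A real implementation would of course use partitioned FFT convolution rather than this per-sample direct form, but the swap-on-completion logic is the same.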

 

2018_02_Michael_test1

Take 1,  autoadaptive exploration

2018_02_Michael_test2

Take 2,  autoadaptive exploration

Self convolution

2018_02_Michael_conv1

Self-convolution take 1

2018_02_Michael_conv2

Self-convolution take 2

2018_02_Michael_conv3

Self-convolution take 3

2018_02_Michael_conv4

Self-convolution take 4

2018_02_Michael_conv5

Self-convolution take 5

2018_02_Michael_conv6

Self-convolution take 6

 

Session with David Moss in Berlin
Fri, 02 Feb 2018
http://crossadaptive.hf.ntnu.no/index.php/2018/02/02/session-with-david-moss-in-berlin/

On Thursday February 1st, we had an enjoyable session at the Universität der Künste in Berlin, at the Grunewaldstraße campus, generously hosted by professor Alberto De Campo. This was a nice opportunity to follow up on earlier collaboration with David Moss, as we have learned so much about performance, improvisation and music in general from him on earlier occasions.
Earlier the same week I had presented the crossadaptive project for prof. De Campo’s students of computational art and performance with complex systems. This environment of arts and media studies at UdK was particularly conducive to our research, and we had some very interesting discussions.

David Moss – vocals
Øyvind Brandtsegg – crossadaptive processing, vocals
Alberto De Campo – observer
Marija Mickevica – observer, and vocals on one take

 

More details on these tracks will follow; for now I am just uploading them here so that the involved parties can get access.

DavidMoss_Take0

Initial exploration, the (becoming) classic reverb+delay crossadaptive situation

DavidMoss_Onefx1

Test session, exploring one effect only

DavidMoss_Onefx2

Test session, exploring one effect only  (2)

DavidMoss_Take1

First take

DavidMoss_Take2

Second take

DavidMoss_Take3

Third take

Then we did some explorations of David telling stories, live convolved with Øyvind’s impulse responses.

DavidMoss_Story0

Story 1

DavidMoss_Story1

Story 2

And we were lucky that student Marija Mickevica wanted to try recording live impulse responses while David was telling stories. Here’s an example:

DavidMoss_StoryMarija

Story with Marija’s impulse responses

And a final take with David and Øyvind, where all previously tested effects and crossadaptive mappings were enabled. Selective mix of effects and modulations was controlled manually by Øyvind during the take.

DavidMoss_LastTake

Final combined take

 

Convolution demo sounds
Thu, 07 Dec 2017
http://crossadaptive.hf.ntnu.no/index.php/2017/12/07/convolution-demo-sounds/

The convolution techniques we’ve developed in this project are aimed at live performance, and so far they have mostly been used for that purpose. The manner in which the filters can be continuously updated might also allow other creative applications within convolution and filtering. As a very basic demo of how the filters sound with prerecorded sounds, we have made two audio examples. Do note that the updating of the filters is still realtime; it is just the source sounds playing into the filters that are prerecorded.

Source

The source sounds are prepared as a stereo file, with sound A in the left channel and sound B in the right channel. Sound A is assembled from sounds downloaded from freesound.org, all made by user Corsica_S (files “alpha.wav”, “bravo.wav”, “charlie.wav”, “delta.wav”). Sound B is a drum loop programmed by Oeyvind Brandtsegg some time in the late 90s.

conv_sources

Source sounds for the convolution demo.

 

Liveconv

Sound A is recorded as the impulse response for liveconv. The length of the filter is approximately 0.5 seconds (the duration of the spoken word used as source). It is replaced approximately every 5.5 seconds, manually automated to line up with when the appropriate word appears in the source. This way, the first IR contains the word “alpha”, then after 5.5 seconds it is replaced with the word “bravo”, and so on until all four words have been used as an IR.
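As a sketch of this replacement logic (using the timing values from the description above; the actual session used manual automation rather than code), a small helper can map any playback position to the currently active impulse response:

```python
SR = 44100  # sample rate assumed for the demo

def active_ir_index(t_samples, interval_sec=5.5, n_words=4, sr=SR):
    """Return which of the n_words impulse responses (0-based) is active
    at playback position t_samples, given that the IR is replaced every
    interval_sec seconds. After the last word, the last IR stays active."""
    interval_samples = int(interval_sec * sr)
    idx = t_samples // interval_samples
    return min(idx, n_words - 1)
```

For example, at 6 seconds into the demo the second IR (“bravo”) is active, and from roughly 16.5 seconds onward the final IR (“delta”) remains in place.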

conv_liveconv

Liveconv demo using above source sounds.

tvconv

Sound A and sound B run continuously through the filter. No freezing of coefficients is applied. The filter length is 32768 samples, approximately 0.74 seconds at a sample rate of 44.1 kHz.
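When the coefficients update continuously at every sample, the operation can be thought of as a running inner product of the two signals’ recent histories. A naive Python sketch of this idea (for illustration only; the actual tvconv implementation works on partitioned blocks for efficiency):

```python
def tvconv_sketch(sig_a, sig_b, filt_len):
    """Naive fully time-varying convolution: at each sample, sound A's
    most recent filt_len samples act as the filter coefficients applied
    to sound B's recent history. Both filter and input update every sample."""
    hist_a = [0.0] * filt_len   # rolling filter coefficients (from A)
    hist_b = [0.0] * filt_len   # rolling input history (from B)
    out = []
    for a, b in zip(sig_a, sig_b):
        hist_a = [a] + hist_a[:-1]
        hist_b = [b] + hist_b[:-1]
        out.append(sum(x * y for x, y in zip(hist_b, hist_a)))
    return out
```

Freezing the coefficients would simply mean no longer pushing new samples into `hist_a`, at which point this reduces to ordinary convolution with a fixed filter.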

conv_tvconv

tvconv demo using the above source sounds.

 

 

Crossadaptive seminar Trondheim, November 2017
Sun, 05 Nov 2017
http://crossadaptive.hf.ntnu.no/index.php/2017/11/05/crossadaptive-seminar-trondheim-november-2017/

As part of the ongoing research on crossadaptive processing and performance, we had a very productive seminar in Trondheim on November 2nd and 3rd, 2017. The current post shows the program of presentations, performances and discussions, and provides links to more detailed documentation of each session as it becomes available. Each part will be added as the documentation is ready, so if something is missing now, do check back later. The seminar was also streamed, and the recorded streams will be archived.

In addition to the researchers presenting, we also had an audience of students from the music technology and the jazz departments, as well as other researchers and teachers from NTNU. We are grateful for the input from the audience to enrich our discussions.

Program:

Thursday 2. November

Practical experiments

Introduction and status. [slides]
Øyvind Brandtsegg

 

Performance
Maja S.K. Ratkje, Øyvind Brandtsegg, Miller Puckette (standin for Stian Westerhus)

 

Work methods and session reports. Experiences, insights, reflections.
Trond Engum (with team)[slides],  Bernt Isak Wærstad (with team) [slides]

Instruments and tools

Instrumental affordances, crossadaptivity as instrumental gesture.
Marije Baalman [slides]

 


Performance
Tone Åse, Carl Haakon Waadeland, Trond Engum

 

Instrument design and technological developments. [slides]
Sigurd Saue, Victor Lazzarini, Øyvind Brandtsegg

 

Friday 3. November

Reflection. Aesthetic and philosophical issues

Documentation methods [slides]
Andreas Bergsland


Performance
Bjørnar Habbestad, Gyrid N. Kaldestad, Bernt Isak Wærstad

 

What does it mean for the performer, for the audience, for the music? How does it change the game?
Solveig Bøe [notes], Simon Emmerson [slides]

Wider use and perspectives

Experiences with Philippe Manoury and Juliana Snapper, thoughts on instrumental control, and a performance
Miller Puckette [PD patches]
(with Øyvind Brandtsegg for a brief liveconvolver performance)

 

Looking at the music from the mix perspective. Viability of effects as expression. The wider field of automatic mixing and adaptive effects.
Gary Bromham [slides], Josh Reiss [slides]

 

Outcomes and evaluation. [slides]
Moderator: Øyvind Brandtsegg
