Convolution demo sounds

The convolution techniques we’ve developed in this project are aimed at live performance and have so far mostly been used for that purpose. The way the filters can be continuously updated might also allow for other creative applications of convolution and filtering. As a very basic demonstration of how the filters sound with prerecorded material, we have made two audio examples. Note that the updating of the filters is still done in real time; it is only the source sounds playing into the filters that are prerecorded.

Source

The source sounds are prepared as a stereo file, with sound A in the left channel and sound B in the right channel. Sound A is assembled from sounds downloaded from freesound.org, all made by user Corsica_S (files “alpha.wav”, “bravo.wav”, “charlie.wav”, “delta.wav”). Sound B is a drum loop programmed by Oeyvind Brandtsegg some time in the late 1990s.


Source sounds for the convolution demo.

Liveconv

Sound A is recorded as the impulse response for liveconv. The length of the filter is approximately 0.5 seconds (the duration of the spoken word used as source). The IR is replaced approximately every 5.5 seconds, with the replacement times set manually to line up with when the appropriate word appears in the source. This way, the first IR contains the word “alpha”; it is then replaced after 5.5 seconds with the word “bravo”, and so on until all four words have been used as an IR.
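For reference, a minimal Csound orchestra sketch of this kind of setup might look as follows. It assumes a Csound version that includes the liveconv opcode (6.09 or later); the file name convdemo_sources.wav is a hypothetical stand-in for the demo’s source file, the table and partition sizes are illustrative, and a simple metro trigger approximates the manually timed IR replacements used in the demo.

    sr     = 44100
    ksmps  = 64
    nchnls = 1
    0dbfs  = 1

    ; table to hold the impulse response; 32768 samples is about 0.74 s
    ; at 44.1 kHz (the demo's IR is about 0.5 s, rounded up here to a
    ; power of two for simplicity)
    giIR ftgen 0, 0, 32768, 2, 0

    instr 1
      ; sound A (words) in the left channel, sound B (drums) in the right;
      ; "convdemo_sources.wav" is a hypothetical file name
      aA, aB diskin2 "convdemo_sources.wav", 1
      ; continuously write sound A into the IR table
      andx   phasor sr / ftlen(giIR)
      tablew aA, andx, giIR, 1
      ; trigger an IR update every 5.5 seconds; liveconv reloads the
      ; table one partition at a time, so the filter output stays seamless
      ktrig  metro 1 / 5.5
      aout   liveconv aB, giIR, 1024, ktrig, 0
      out    aout * 0.5
    endin

In the actual demo, each replacement was timed by hand so that the IR capture lines up with one spoken word; the periodic trigger above is only a rough approximation of that.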


Liveconv demo using the above source sounds.

tvconv

Sound A and sound B run continuously through the filter. No freezing of coefficients is applied. The filter length is 32768 samples, approximately 0.74 seconds at a sample rate of 44.1 kHz.
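A corresponding sketch for tvconv is given below, assuming the same orchestra header (and the same hypothetical file name) as the liveconv sketch above. Both freeze inputs are held at 0, which we take to mean no freezing, so the coefficients update continuously as in the demo; the partition size of 1024 samples is an illustrative choice.

    instr 2
      ; sound A and sound B run continuously through the filter
      aA, aB diskin2 "convdemo_sources.wav", 1  ; hypothetical file name
      ; xfreez1 = xfreez2 = 0: neither input is frozen (assumption: 0 = freezing off)
      ; iparts = 1024 (partition size), ifils = 32768 samples (~0.74 s at 44.1 kHz)
      aout   tvconv aA, aB, 0, 0, 1024, 32768
      out    aout * 0.3
    endin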


tvconv demo using the above source sounds.

Crossadaptive seminar Trondheim, November 2017

As part of the ongoing research on crossadaptive processing and performance, we had a very productive seminar in Trondheim on 2 and 3 November 2017. This post shows the program of presentations, performances and discussions, and provides links to more detailed documentation of each session as it becomes available. Each part will be added as the documentation is ready, so if something is missing now, do check back later. The seminar was also streamed here, and the recorded streams will be archived.

In addition to the researchers presenting, we also had an audience of students from the music technology and jazz departments, as well as other researchers and teachers from NTNU. We are grateful for the audience’s input, which enriched our discussions.

Program:

Thursday 2 November

Practical experiments

Introduction and status. [slides]
Øyvind Brandtsegg

Performance
Maja S.K. Ratkje, Øyvind Brandtsegg, Miller Puckette (stand-in for Stian Westerhus)

Work methods and session reports. Experiences, insights, reflections.
Trond Engum (with team) [slides], Bernt Isak Wærstad (with team) [slides]

Instruments and tools

Instrumental affordances, crossadaptivity as instrumental gesture.
Marije Baalman [slides]


Performance
Tone Åse, Carl Haakon Waadeland, Trond Engum

Instrument design and technological developments. [slides]
Sigurd Saue, Victor Lazzarini, Øyvind Brandtsegg

Friday 3 November

Reflection. Aesthetic and philosophical issues

Documentation methods [slides]
Andreas Bergsland


Performance
Bjørnar Habbestad, Gyrid N. Kaldestad, Bernt Isak Wærstad

What does it mean for the performer, for the audience, for the music? How does it change the game?
Solveig Bøe [notes], Simon Emmerson [slides]

Wider use and perspectives

Experiences with Philippe Manoury and Juliana Snapper, thoughts on instrumental control, and a performance
Miller Puckette [PD patches]
(with Øyvind Brandtsegg for a brief liveconvolver performance)

Looking at the music from the mix perspective. Viability of effects as expression. The wider field of automatic mixing and adaptive effects.
Gary Bromham [slides], Josh Reiss [slides]

Outcomes and evaluation. [slides]
Moderator: Øyvind Brandtsegg