Exploring radically new modes of musical interaction in live performance
Category: Performative
Master thesis at UCSD (Jordan Morton)
October 11, 2018
-
Jordan Morton writes about her experiences within the crossadaptive project in her master's thesis at the University of California San Diego. The title of the thesis is "Composing A Creative Practice: Collaborative Methodologies and Sonic Self-Inquiry In The Expansion Of ...
Concert presentation at the Artistic Research Forum in Bergen
October 11, 2018
-
We presented the project during the Artistic Research Forum in Bergen on September 24th 2018. The presentation consisted of a short introduction of the concept, tools and methodologies followed by a 20 minute musical performance showing several of the crossadaptive ...
Concert at Dokkhuset, May 2018
October 11, 2018
-
The project held a concert at Dokkhuset on May 26th. This concert was made as a presentation of the artistic outcome, towards the end of the project. Assuming there is no "final result" of an artistic process, but still representing ...
Session with Kim Henry Ortveit
February 22, 2018
-
Kim Henry is currently a master student at NTNU music technology, and as part of his master project he has designed a new hybrid instrument. The instrument allows close interactions between what is played on a keyboard (or rather a ...
Session with David Moss in Berlin
February 2, 2018
-
Thursday February 1st, we had an enjoyable session at the Universität der Künste in Berlin. This was at the Grunewaldstraße campus, generously hosted by professor Alberto De Campo. This was a nice opportunity to follow up on earlier collaboration ...
Crossadaptive seminar Trondheim, November 2017
November 5, 2017
-
As part of the ongoing research on crossadaptive processing and performance, we had a very productive seminar in Trondheim 2. and 3. November (2017). The current post will show the program of presentations, performances and discussions and provide links to ...
Adaptive Parameters in Mixed Music
October 23, 2017
-
Adaptive Parameters in Mixed Music Introduction During the last several years, the interplay between processed and acoustic sounds has been the focus of research at the department of music technology at NTNU. So far, the project “Cross-adaptive processing as musical ...
Session with 4 singers, Trondheim, August 2017
October 9, 2017
-
Location: NTNU, Studio Olavshallen. Date: August 28 2017 Participants: Sissel Vera Pettersen, vocals Ingrid Lode, vocals Heidi Skjerve, vocals Tone Åse, vocals Øyvind Brandtsegg, processing Andreas Bergsland, observer and video documentation Thomas Henriksen, sound engineer Rune Hoemsnes, sound engineer We ...
First reflections after Studio A session
September 8, 2017
-
This post merely sums up some of the thoughts rotating in my head right after this session in May 2017, and then again some more reflections that occurred during the mixing process together with Andrew Munsie (in June). Some of ...
Session in UCSD Studio A
September 8, 2017
-
This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all ...
Session with Jordan Morton and Miller Puckette, April 2017
June 9, 2017
-
This session was conducted as part of preparations for the larger session in UCSD Studio A, and we worked on calibration of the analysis methods to Jordan's double bass and vocals. Some of the calibration and accommodation of signals also includes ...
Playing or being played – the devil is in the delays
June 9, 2017
-
Since the crossadaptive project involves designing relationships between performative actions and sonic responses, it is also about instrument design in a wide definition of the term. Some of these relationships can be seen as direct extensions to traditional instrument features, ...
Live convolution session in Oslo, March 2017
June 7, 2017
-
Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice) Mats Claesson (documentation and observation). The focus for this session was to work with the new live convolver in Ableton Live Setup - getting to know the Convolver We ...
Second session at Norwegian Academy of Music (Oslo) – January 13. and 19., 2017
June 7, 2017
-
Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice) The focus for this session was to play with, fine tune and work further on the mappings we set up during the last session at NMH in November. Due ...
The entrails of Open Sound Control, part one
April 7, 2017
-
Many of us are very used to employing the Open Sound Control (OSC) protocol to communicate with synthesisers and other music software. It's very handy and flexible for a number of applications. In the cross adaptive project, OSC provides the ...
Cross adaptive session with 1st year jazz students, NTNU, March 7-8
April 6, 2017
-
This is a description of a session with first year jazz students at NTNU recorded March 7 and 8. The session was organized as part of the ensemble teaching that is given to jazz students at NTNU, and was meant to take care ...
Live convolution with Kjell Nordeson
March 23, 2017
-
Session at UCSD March 14. Kjell Nordeson: Drums Øyvind Brandtsegg: Vocals, Convolver. Contact mikes In this session, we explore the use of convolution with contact mikes on the drums to reduce feedback and cross-bleed. There is still some bleed from ...
Conversation with Marije, March 2017
March 20, 2017
-
After an inspiring talk with Marije on March 3rd, I set out to write this blog post to sum up what we had been talking about. As it happens (and has happened before), Marije had a lot of pointers to ...
Seminar: Mixing and timbral character
March 2, 2017
-
Online conversation with Gary Bromham (London), Bernt Isak Wærstad (Oslo), Øyvind Brandtsegg (San Diego), Trond Engum and Andreas Bergsland (Trondheim). Gyrid N. Kaldestad, Oslo, was also invited but unable to participate. The meeting revolves around the issues "mixing and timbral ...
Convolution experiments with Jordan Morton
March 1, 2017
-
Jordan Morton is a bassist and singer, she regularly performs using both instruments combined. This provides an opportunity to explore how the liveconvolver can work when both the IR and the live input are generated by the same musician. We did a ...
Session UCSD 14. Februar 2017
February 15, 2017
-
Session objective The session objective was to explore the live convolver, how it can affect our playing together and how it can be used. New convolver functionality for this session is the ability to trigger IR update via transient detection, ...
Seminar 16. December
February 3, 2017
-
Philosophical and aesthetical perspectives – report from meeting 16/12 Trondheim/Skype Andreas Bergsland, Trond Engum, Tone Åse, Simon Emmerson, Øyvind Brandtsegg, Mats Claesson The performers' experiences of control: In the last session (Trondheim December session) Tone and Carl Haakon (CH) worked with ...
Concerts and presentations, fall 2016
December 15, 2016
-
A number of concerts, presentations and workshops were given during October and November 2016. We could call it the 2016 Crossadaptive Transatlantic tour, but we won’t. This post gives a brief overview. Concerts in Trondheim and Göteborg BRAK/RUG was scheduled ...
Oslo, First Session, October 18, 2016
December 12, 2016
-
First Oslo Session. Documentation of process 18.11.2016 Participants Gyrid Kaldestad, vocal Bernt Isak Wærstad, guitar Bjørnar Habbestad, flute Observer and Video Mats Claesson The Session took place in one of the sound studios at the Norwegian Academy of Music, Oslo ...
Seminar at De Montfort
June 14, 2016
-
Wednesday June 8th we visited Simon Emmerson at De Montfort and also met Director Leigh Landy. We were very well taken care of and had a pleasant and interesting stay. One of the main objectives was to do seminar ...
Project start meeting in Trondheim
June 14, 2016
-
Kickoff Monday June 6th we had a project start meeting with the NTNU based contributors: Andreas Bergsland, myself, Solveig Bøe, Trond Engum, Sigurd Saue, Carl Haakon Waadeland and Tone Åse. This gave us the opportunity to present the current state ...
As part of the ongoing research on crossadaptive processing and performance, we had a very productive seminar in Trondheim 2. and 3. November (2017). The current post will show the program of presentations, performances and discussions and provide links to more detailed documentation of each session as it becomes available. Each part will be added as the documentation is ready, so if something is missing now, do check back later. The seminar was also streamed
here
, and the recorded streams will be archived.
In addition to the researchers presenting, we also had an audience of students from the music technology and jazz departments, as well as other researchers and teachers from NTNU. We are grateful for the input from the audience, which enriched our discussions.
Program:
Thursday 2. November
Practical experiments
Introduction and status.
[slides]
Øyvind Brandtsegg
Performance
Maja S.K. Ratkje, Øyvind Brandtsegg, Miller Puckette (stand-in for Stian Westerhus)
Work methods and session reports. Experiences, insights, reflections.
Trond Engum (with team)
[slides]
, Bernt Isak Wærstad (with team)
[slides]
Instruments and tools
Instrumental affordances, crossadaptivity as instrumental gesture.
Marije Baalman
[slides]
Performance
Tone Åse, Carl Haakon Waadeland, Trond Engum
Instrument design and technological developments.
[slides]
Sigurd Saue, Victor Lazzarini, Øyvind Brandtsegg
Performance
Bjørnar Habbestad, Gyrid N. Kaldestad, Bernt Isak Wærstad
What does it mean for the performer, for the audience, for the music? How does it change the game?
Solveig Bøe
[notes]
, Simon Emmerson
[slides]
Wider use and perspectives
Experiences with Philippe Manoury and Juliana Snapper, thoughts on instrumental control, and a performance
Miller Puckette
[PD patches]
(with Øyvind Brandtsegg for a brief liveconvolver performance)
Looking at the music from the mix perspective. Viability of effects as expression. The wider field of automatic mixing and adaptive effects.
Gary Bromham
[slides]
, Josh Reiss
[slides]
Outcomes and evaluation.
[slides]
Moderator: Øyvind Brandtsegg
During the last several years, the interplay between processed and acoustic sounds has been a focus of research at the department of music technology at NTNU. So far, the project "Cross-adaptive processing as musical intervention" has mainly been concerned with improvised music, as exemplified by the music of T-EMP. However, through discussions with Øyvind Brandtsegg and several others at NTNU, I have come to find several aspects of their work interesting for the world of written composition, especially mixed music. Frengel (2010) defines this as a form of electroacoustic music which combines a live and/or acoustic performer with electronics. Over the last few years I have become especially interested in Philippe Manoury's conception of mixed music, which he calls real-time music (
la musique du temps réel
). This puts emphasis on the idea of real-time versus deferred time, which we will come back to later in this article.
The aspect of the cross-adaptive synthesis project that resonates most with mixed music is the idea of adaptive parameters in its most basic form: a parameter X of one sound influences a parameter Y of another sound. To this author, this idea can be directly correlated to Philippe Manoury's conception of
partitions virtuelles
, which could be translated into English as virtual partition or sheet music. In this article, we will explore some of the links between these concepts, and especially how they can start to influence our compositional practices in mixed music. A few examples of how composers have used adaptive parameters will also be given. It is important to point out that the composers named in this article are far from the only ones using adaptive parameters; they are singled out because they have applied them in an outstanding and/or very pedagogical way with respect to the technical aspects of adaptive parameters.
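To make this basic idea concrete, a minimal sketch is given below. It is not taken from any of the works discussed in this article: the analysed feature (amplitude) and the controlled parameter (a filter cutoff) are simply stand-ins for whatever analysis and effect parameter a given piece might use.

```python
import numpy as np

def rms(block):
    """Parameter X: short-term amplitude of sound A (one block of samples)."""
    return float(np.sqrt(np.mean(block ** 2)))

def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Scale a feature value into a useful parameter range (with clipping)."""
    x = min(max(x, in_lo), in_hi)
    return out_lo + (x - in_lo) / (in_hi - in_lo) * (out_hi - out_lo)

def adaptive_step(block_a, set_filter_cutoff_b):
    """Parameter X of sound A (amplitude) influences parameter Y of sound B
    (a filter cutoff): louder playing on A opens the filter on B."""
    cutoff_hz = map_range(rms(block_a), 0.0, 0.5, 200.0, 8000.0)
    set_filter_cutoff_b(cutoff_hz)
```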
Partitions virtuelles, Relative Values & Absolute Values
Let us first establish the term virtual partition. Manoury's (1997) conception comes originally from discussions with the late Pierre Boulez, although it was Manoury who expanded on the concept and extended it through compositions such as "En écho" (1993-1994), "Neptune" (1991), and "Jupiter" (1987).
Virtual partition is a concept that refers directly to the idea of notation as it is used in traditional acoustic music. If we think of Beethoven's piano sonatas, the notation tells us which notes to play (i.e. C4 followed by E4, etc.). It might also contain some tempo marks, dynamic markings and phrasing. Do these parameters form the complete list of parameters that a musician can play? The simple answer is no. These are only basic parameters, and often parameters of relative value. A fortissimo symbol does not mean "play at 90 dB"; the symbol is relative to what has come before and what will come after. The absolute parameter that we have in this case is the notes that are written in the sheet music. An A4 will remain an A4. This duality of absolute against relative is also what has given us the world of interpretation. Paul Lewis will not play the same piece in exactly the same way as András Schiff, even though they will be playing the exact same notes. This is a vital part of the richness of the classical repertoire: its interpretation.
As Manoury points out, in early electroacoustic music this kind of interpretation was not possible. The so-called "tyranny of tape" made it rather difficult to have more relative values, and in fact also limited the relative values of the live musicians, as their tempo and dynamics had to take into consideration a medium that could not be changed then and there. This aspect of the integration of the electronics is a large field in itself, and one that interests this author very much. Although an exhaustive discussion of it is far beyond the scope of this article, it should be noted that adaptive parameters can be used with most of these integration techniques. However, at this point this author does believe that score following and its real-time relative techniques permit this level of interactivity on a higher plane than, say, a scene system such as in the piece "Lichtbogen" (1986-1987) by Kaija Saariaho, where the electronics are set at specific levels until the next scene starts.
Therefore, the main idea of the virtual partition is to bring interpretation into the use of electronics within mixed music. The first step towards doing so is to work with both relative and absolute variables. How does this relate to adaptive parameters? By using electronics with several relative values that are influenced either by the electronics themselves, the musician(s) or a combination of both, it becomes possible to bring interpretation to the world of electronics. In other words, the use of adaptive parameters within mixed music can bring more interpretation and fluidity into the genre.
Time & Musical Time
The traditional complaint about, and reason for not implementing, adaptive parameters/virtual partitions in mixed music has been their difficulty. The cross-adaptive synthesis project has proven that complex and beautiful results can be made with adaptive parameters. The flexibility of the electronics can only make the rapport between the musicians and the computer a more positive one. Another reason that has often been cited is whether the relationships would be clear to the listener. This author feels that this criticism is slightly oblique. The use of adaptive parameters may not always be heard by the audience, but it does influence how connected the performer(s) feel to the electronics. It is also the basis for being able to create an electronic interpretation. Composers like Philippe Manoury, for example, believe that the use of a real-time system with adaptive parameters is the only way to preserve expressivity while working between electronic and acoustic instruments (Ramstrum, 2006).
Another problem that has often come up when it comes to written music is how to combine the precise activity of contemporary writing with a computer. Back in the 1980s, composers like Philippe Manoury often had to rely on the help of programmers such as Miller Puckette to come up with novel ideas (which in their case would later lead to the development of MaxMSP and Pure Data). However, the arrival of more stable score followers and recognizers (a distinction brought to light in Manoury, 1997, p. 75-78) has made it possible to think of electronics within musical time (e.g. a quarter note) instead of absolute time (e.g. milliseconds). This also allows a composer to further integrate the interpretation of the electronics directly into the language of the score. One could understand this as the computer being able to translate information from a musical language to a binary language.
To give a simple example, we can assume that the electronic process we are controlling is a playback file on a computer. In the score of our imaginary piece, over 5 measures the solo musician must go from fortissimo to pianissimo, eventually fading out al
niente
(meaning gradually fading to silence). As this is happening, the composer would like the pitch of the playback file to rise as the amplitude of the musician goes down. In absolute time, one would have to program the number of milliseconds each measure should take and hope that the musician and computer would stay in sync. However, using relative time is easier, as one can tell the computer that the musician will be playing 5 measures. If we combine this with adaptive parameters, we can directly (or indirectly, if preferred) connect the amplitude of the musician to the pitch of the playback for those 5 measures. This idea, which most probably seemed like a dream for composers in the 1980s, has now become reality thanks to programs like Antescofo (described in Cont, 2008) as well as the concept of adaptive parameters, which is becoming more commonplace.
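A small sketch of how such a relative-time connection could look is given below. It assumes that a score follower (Antescofo or similar) reports which of the five measures the soloist is currently in, and that a running amplitude analysis of the soloist is available; the transposition range and the weighting are invented for the example.

```python
def playback_transposition(measure_index, soloist_amplitude,
                           total_measures=5, max_shift_semitones=12.0):
    """Relative-time mapping: the pitch of the playback file rises as the
    soloist's amplitude falls over the given span of measures.
    measure_index: 0 .. total_measures-1, reported by the score follower.
    soloist_amplitude: normalised 0..1 value from the amplitude analysis."""
    progress = measure_index / float(total_measures - 1)  # 0 at start, 1 at end
    inverse_amp = 1.0 - min(max(soloist_amplitude, 0.0), 1.0)
    # Weight the amplitude-driven rise by how far we are into the passage,
    # so the full upward shift is only reached towards the final measure.
    return max_shift_semitones * progress * inverse_amp
```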
The use of microphones and/or other sensors allows us to extract more information from the musician(s), and one can connect these data to specific parameters as directly or indirectly as one wishes. Øyvind Brandtsegg's software also allows the extraction of many features from the sound(s) and the mapping of these to audio processing parameters. In combination with a score follower or other integration methods, this can be an incredibly powerful tool for organizing a composition.
Examples from the mixed music repertoire should also be named, both to give inspiration and to show that this is in fact possible and already in use. Philippe Manoury, for example, has used parameters of the live musician(s) to calculate synthesis in many pieces ranging from "Pluton" (1988) to his string quartet "Tensio" (2010). In both pieces, several parameters (such as pitch) are used to create Markov chains and to control synthesis parameters.
Let us look a bit deeper into Manoury's masterpiece "Jupiter" (1987). Although this piece is an early example of the advanced use of electronics with a computer, adaptive parameters were already used actively. For a detailed analysis and explanation of the composition, refer to May (2006). In sections VI and XII the flutist's playing influences the change of timbre over time. The attacks of the flutist's notes (called note-incipits in May, 2006) control the frequency of the 28 oscillators of the chapo synthesizer – a type of additive synthesis with spectral envelope (ibid., p. 149) – and its filters. These changes in timbre (and pitch) are also shown throughout the score (Manoury, 2008, p. 32). This is around the 23:06 mark in the recording done by Elizabeth McNutt (2001).
The temporality of several other sections is also influenced by the performer. For example, in section II the computer records short instances of the performer which will only be used later, in sections V and IX, where these excerpts are interpolated and then morphed into tam-tam hits or piano. As May (2006) notes, the flutist's line directly affects the shape, speed and direction of the interpolations, marking a clear relationship between both parts. The sections in which the performer is recorded are clearly marked in the score. In several of the interpolation sections, the electronics take the lead and the performer is told to wait for certain musical elements. The score is also clear about how the interpolations are played and how they are directly related to the performer's actions.
The composer Hans Tutschku has also used this idea in his series "Still Air", which currently features three compositions (2013, 2014, 2014). In all the pieces, the musicians play along to an iPad, which for many musicians may be easier than installing MaxMSP, a soundcard and an external microphone. The iPad's built-in microphone is used to measure the musicians' amplitude, which varies the amplitude and pitch of the playback files. The exact relationship between the musician and the electronics varies throughout the composition. This means that the way the amplitude of the signal modifies the pitch and loudness of the playback part will vary depending on the current event number, which is shown in the score (Tutschku, personal communication, October 10, 2017). This use of adaptive parameters is simple, yet it still permits the composer to exert a large influence on the relationship between performer and computer.
A third and final example is "Mahler in Transgress" (2002-2003) by Flo Menezes, especially its restoration done in 2007 by Andre Perrotta, which uses Kyma (Perrotta, personal communication, October 6, 2017). Parameters from both pianists are used to control several aspects of the electronics. For example, the sound of one piano can filter the other, or one's amplitude can affect the spectral profile of the other. Throughout the duration of the composition, the electronics mainly process both pianos, as the two influence each other's timbre. This creates a clear relationship, audible to the audience, between what both performers are playing and the electronics.
These are only three examples that this author believes show many different possibilities for the use of adaptive parameters in through-composed music. This is by no means meant to be an exhaustive list, but only a start towards a common language and understanding of adaptive parameters in mixed music.
A Composition Must Be Like the World or… ?
Gustav Mahler's (perhaps apocryphal) quotation "A symphony must be like the world. It must contain everything" is popular with composition students. However, this author's understanding of composition, especially in the field of mixed music, has led him to believe that a composition should be a world of its own. Its rules should be guided by the poetical meaning of the composition. The rules and ideas that suit one composition's electronics and structures will not necessarily suit another composition. Each composition (the whole formed by the score and the computer program/electronics) forms its own world.
With this article, this author does not wish to delve into a debate over aesthetics. However, these concepts and possibilities do tend to favour music made in real time. For years it was thought that the possibilities of live electronics were limited, but in recent years this has changed. The field is still ripe for experimentation within the world of through-composed music.
At this point in time, this author is also experimenting directly with the concept of cross-adaptive synthesis written into through-composed music. I see no reason why it shouldn't be used both within and outside freely improvised music. We should think of technology and its concepts not within aesthetic boundaries, but in terms of how we can use them for our own purposes.
Bibliography
Cont, A. (2008). "ANTESCOFO: Anticipatory synchronization and control of interactive parameters in computer music." In Proceedings of the International Computer Music Conference (ICMC), Belfast, August 2008.
Frengel, M. (2010). A Multidimensional Approach to Relationships between Live and Non-live Sound Sources in Mixed Works. Organised Sound, 15(2), 96–106. https://doi.org/10.1017/S1355771810000087
Manoury, P. (1997). Les partitions virtuelles. In La note et le son: Écrits et entretiens 1981-1998. Paris, France: L'Harmattan.
Manoury, P. (1998). Jupiter (score, originally 1987, revised in 1992, 2008). Italy: Universal Music MGB Publishing.
May, Andre (2006). Philippe Manoury's Jupiter. In Simoni, Mary (Ed.), Analytical methods of electroacoustic music (pp. 145-186). New York, USA: Routledge Press.
Ramstrum, Momilani (2006). Philippe Manoury's opera K…. In Simoni, Mary (Ed.), Analytical methods of electroacoustic music (pp. 239-274). New York, USA: Routledge Press.
Participants:
Sissel Vera Pettersen, vocals
Ingrid Lode, vocals
Heidi Skjerve, vocals
Tone Åse, vocals
Øyvind Brandtsegg, processing
Andreas Bergsland, observer and video documentation
Thomas Henriksen, sound engineer
Rune Hoemsnes, sound engineer
We also had the NTNU documentation team (Martin Kristoffersen and Ola Røed) making a separate video recording of the session.
Session objective and focus:
We wanted to try out crossadaptive processing with similar instruments. Until this session, we had usually used it on a combination of two different instruments, leading to very different analysis conditions. The analysis methods respond a bit differently to each instrument type, and each instrument also "triggers" the processing in its own particular manner. It was thought interesting to try some experiments under more "even" conditions. Using four singers and combining them in different duo configurations, we also saw the potential for gleaning personal expressive differences and approaches to the crossadaptive performance situation. This also allowed them to switch roles, i.e. performing under the processing condition where they previously had the modulating role. No attempt was made to exhaustively try every possible combination of roles and effects; we just wanted to try a variety of scenarios possible with the current resources. The situation proved interesting in so many ways, and further exploration would be necessary to probe the research potential herein.
In addition to the analyzer-modulator variant of crossadaptive processing, we also did several takes of
live convolution
and
streaming convolution
. This session was the very first performative exploration of streaming convolution.
We used a reverb (Valhalla) on one of the signals, and a custom granular reverb (partikkelverb) on the other. The crossadaptive mappings were first designed so that each of the signals could have a "prolongation" effect (larger size for the reverb, more time smearing for the granular effect). However, after the first take, it seemed that the time smearing of the granular effect was not so clearly perceived as a musical gesture. We then replaced the time smearing parameter of the granular effect with a "graininess" parameter (controlling grain duration). This setting was used for the remaining takes. We used transient density combined with amplitude to control the reverb size, where louder and faster singing would make the reverb shorter (smaller). We used dynamic range to control the time smearing parameter of the granular effect, and used transient density to control the grain size (faster singing makes the grains shorter).
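For readers who want a more concrete picture, here is a rough sketch in Python of the mapping described above. The feature names, value ranges and scaling are assumptions made for illustration only; the actual session used the project's analyzer-modulator tools.

```python
def scale(value, out_lo, out_hi):
    """Map a normalised 0..1 feature value into a parameter range."""
    value = min(max(value, 0.0), 1.0)
    return out_lo + value * (out_hi - out_lo)

def modulate(mod):
    """'mod' holds normalised (0..1) analysis features of the modulating
    singer; the returned values drive the effects on the other singer."""
    activity = 0.5 * (mod["amplitude"] + mod["transient_density"])
    return {
        # louder and faster singing on the modulating voice -> smaller reverb
        "reverb_size": scale(1.0 - activity, 0.2, 0.95),
        # faster singing -> shorter grains ("grainier" granular effect)
        "grain_duration": scale(1.0 - mod["transient_density"], 0.01, 0.25),
        # (first take only) larger dynamic range -> more time smearing
        "time_smear": scale(mod["dynamic_range"], 0.0, 1.0),
    }
```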
Video digest of the session
Crossadaptive analyzer-modulator takes
Crossadaptive take 1: Heidi/Ingrid
Heidi has a reverb controlled by Ingrid's amplitude and transient density
– louder and faster singing makes the reverb shorter
Ingrid has a time smearing effect.
– time is more slowed down when Heidi uses a larger dynamic range
Crossadaptive take 2: Heidi/Sissel
Heidi has a reverb controlled by Sissel's amplitude and transient density
– louder and faster singing makes the reverb shorter
Sissel has a granular effect.
– the effect is more grainy (shorter grain duration) when Heidi plays with a higher transient density (faster)
Crossadaptive take 3: Sissel/Tone
Sissel has a reverb controlled by Tone's amplitude and transient density
– louder and faster singing makes the reverb shorter
Tone has a granular effect.
– the effect is more grainy (shorter grain duration) when Sissel plays with a higher transient density (faster)
Crossadaptive take 4: Tone/Ingrid
Ingrid has a reverb controlled by Tone's amplitude and transient density
– louder and faster singing makes the reverb shorter
Tone has a granular effect.
– the effect is more grainy (shorter grain duration) when Ingrid plays with a higher transient density (faster)
Crossadaptive take 5: Tone/Ingrid
Same settings as for take 4
Convolution
Doing live convolution with two singers was thought interesting for the same reasons as listed in the introduction, creating a controlled scenario with two similarly-featured signals. As the voice is in itself one of the richest instruments in terms of signal variation, it was also interesting to explore convolution with these instruments. We used the now familiar live convolution techniques, where one of the performers records an impulse response and the other plays through it. In addition, we explored
streaming convolution
, developed by Victor Lazzarini as part of this project. In streaming convolution, the two signals are treated even more equally than is the case in live convolution. Streaming convolution simply convolves two circular buffers of a predetermined length, allowing both signals to have the exact same role in relation to the other. It also has a "freeze mode", where updating of the buffer is suspended, allowing one or the other (or both) of the signals to be kept stationary as a filter for the other. This freezing was controlled by a physical pedal, in the same manner as we use a pedal to control IR sampling with live convolution. In some of the videos one can see the singers raising their hand, as a signal to the other that they are now freezing their filter. When the signal is not frozen (i.e. streaming), there is a practically indeterminate latency in the process as seen from the performer's perspective. This stems from the fact that the input stream is segmented with respect to the filter length. Any feature recorded into the filter will have a position in the filter dependent on when it was recorded, and the perceived latency between an input impulse and the convolver output of course relies on where in the "impulse response" the most significant energy or transient can be found. The technical latency of the filter is still very low, but the perceived latency depends on the material.
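To illustrate the principle (and only the principle), here is a heavily simplified sketch of convolving two circular buffers with a freeze option. It ignores partitioning, overlap-add and latency optimisation, and is not Victor Lazzarini's actual implementation; buffer and block sizes are arbitrary.

```python
import numpy as np

class StreamingConvolver:
    """Sketch: convolve two circular buffers of equal, fixed length.
    Either buffer can be 'frozen' (updates suspended) so that it acts as a
    stationary filter for the other, as described above."""

    def __init__(self, buf_len=16384, block_size=512):
        self.buf_a = np.zeros(buf_len)
        self.buf_b = np.zeros(buf_len)
        self.block_size = block_size
        self.freeze_a = False
        self.freeze_b = False
        self.pos = 0  # shared write position into both circular buffers

    def process(self, block_a, block_b):
        n = self.block_size
        idx = (self.pos + np.arange(n)) % len(self.buf_a)
        if not self.freeze_a:
            self.buf_a[idx] = block_a  # stream input A into its buffer
        if not self.freeze_b:
            self.buf_b[idx] = block_b  # stream input B into its buffer
        self.pos = (self.pos + n) % len(self.buf_a)
        # Convolve the two buffers in the frequency domain and return one
        # block of the result (proper overlap-add is omitted for brevity).
        size = 2 * len(self.buf_a)
        spec = np.fft.rfft(self.buf_a, size) * np.fft.rfft(self.buf_b, size)
        return np.fft.irfft(spec, size)[:n]
```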
Liveconvolver take 1: Tone/Sissel
Tone records the IR
Liveconvolver take 2: Tone/Sissel
Sissel records the IR
Liveconvolver take 3: Heidi/Sissel
Sissel records the IR
Liveconvolver take 4: Heidi/Sissel
Heidi records the IR
Liveconvolver take 5: Heidi/Ingrid
Heidi records the IR
Streaming Convolution
These are the very first performative explorations of the streaming convolution technique.
This post merely sums up some of the thoughts rotating in my head right after this session in May 2017, and then again some more reflections that occurred during the mixing process together with Andrew Munsie (in June). Some of the tracks are not yet mixed, and these have been sent to Gary Bromham, so we can get some reflections from him too during his mixing process.
For this session I had made a set of mappings for each duo configuration, trying to guess what would be interesting. I had made mappings that would be relatively rich and organic, some being more obvious and large-gesture, others being more nuanced and small-scale modulations. The intent was to create instruments that could be used for a longer time stretch without getting "worn out". The mappings contained 4 to 8 features from each instrument, mapped to modulate 3 to 7 effect parameters on the other instrument. With this going on in both directions (one musician modulating the other and vice versa), it adds up to a pretty complex interaction scenario. There is nothing magic about these numbers (number of features and modulators), it just happened to be the amount of modulation mappings I could conceptualize as reasonable combinations. The number of modulations (effect parameter destinations) is slightly less than the number of features because I would oftentimes combine several features (add, gate, etc.) for each modulator. Still, I would also re-use some features for several modulators, so the number of features is just slightly higher than the number of modulators.
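As an illustration of what combining features (add, gate, etc.) can mean in practice, here is a minimal sketch. The feature names, the threshold and the destinations are invented; the point is only the add/gate/re-use pattern described above.

```python
def build_modulators(f):
    """'f' is a dict of normalised (0..1) analysis features for one musician;
    returns modulator values for effect parameters on the other musician."""
    # add: two related features combined into one activity measure
    activity = 0.5 * f["transient_density"] + 0.5 * f["rms"]
    # gate: dynamic range only passes through when the signal is loud enough
    gated_dynamics = f["dynamic_range"] if f["rms"] > 0.2 else 0.0
    # re-use: the same feature feeds more than one modulator destination
    return {
        "delay_feedback": 1.0 - gated_dynamics,
        "filter_cutoff": activity,
        "grain_rate": f["transient_density"],
    }
```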
Right after the session:
Reflection on things that could have been done differently: I think that it might perhaps have been better to use simpler parameter mappings, to get something that would be very obvious and clear, very distinctly gestural. This would perhaps have been easier for the musicians to relate to intuitively. Subtle and complex mappings are nice, but may just create a mushy din. Since they will be partly "hidden" from the musicians (due to the subtlety of the mapping, and also the signal balance during performance), they will not be finely controlled. Thus, to some extent, they will be randomly related to the performative gestures. Complexity adds noise too (one could say), both for the performer and for listeners of the music. Selection of effects is also just as important as the parameter mappings. Try to make something that is more gesturally responsive. One specific element that was problematic was the delay time change without pitch modification. Perhaps this is not so great. Not easy to control for the performer, and not easily perceived (by the other performer, or by an external listener) either. Related to the liveconvolver takes, I realize that the convolver effect is not so much gestural, but more a block-wise imposition of one sound on another. (Obvious enough when one thinks about it, but still worth mentioning.)
Reflections during mixing:
We hear rich interactions, the subtle nuances work well (contrary to reflections right after the session). One does not really have to
decode
or intellectualize the mapping, just go with the flow, listen. Sometimes I
listen for
a specific modulation and totally lose the context and musical meaning. Still, in the mixing process, this is natural and necessary.
The comments of Kyle and Steven that they "would play the same anyway" come in a different light now, as it is hard to imagine you would not change the performance in response to the processing. The instruments and the processing constitute a whole, perhaps more easily perceived as a whole now in hindsight, but this may very well relate to listening habit. Getting to know this musical situation better will make the whole easier to perceive during performance. Still, the musicians would to some extent not expressively utilize the potential of the complex mappings. This is partly because they did not know the details exactly, …perhaps I got exactly what I set it up to do: not telling them the mappings and making them "rich".
This session was done May 11th in Studio A at UCSD. I wanted to record some of the performer constellations I had worked with in San Diego during Fall 2016 / Spring 2017. Even though I had worked with all these performers in different constellations, some new combinations were tested this day. The approach was to explore fairly complex feature-modulator mappings. There was no particular focus on intellectualizing the details of these mappings, but rather on experiencing them as a whole, "as instrument". I had found that simple mappings, although easy to decode and understand for both performer and listener, would quickly "wear out" and become flat, boring or plainly limiting for musical development during the piece. I attempted to create some "rich" mappings, with combinations of different levels of subtlety: some clearly audible and some subtle timbral effects. The mappings were designed with some specific musical gestures and interactions in mind, and these are listed together with the mapping details for each constellation later in this post.
During this session, we also explored the
live convolver
in terms of how the audio content in the IR affects the resulting creative options and performative environment for the musician playing through the effect. The liveconvolver takes are presented interspersed with the crossadaptive “feature-modulator” (one could say “proper crossadaptive”) takes. Recording of the impulse response for the convolution was triggered via an external pedal controller during performance, and we let each musician in turn have the role of IR recorder.
Participants:
Jordan Morton: double bass and voice
Miller Puckette: guitar
Steven Leffue: sax
Kyle Motl: double bass
Oeyvind Brandtsegg: crossadaptive mapping design, processing
Andrew Munsie: recording engineer
The music played was mostly free improvisations, but two of the takes with Jordan Morton were performances of her compositions. These were composed in dialogue with the system, during and in between earlier sessions. She both plays the bass and sings, and wanted to explore how phrasing and shaping of precomposed material could be used to expressively control the timbral modulations of the effects processing.
Jordan Morton: bass and voice.
These pieces are composed by Jordan, with the intention that they be performed freely and shaped according to the situation at performance time, allowing the crossadaptive modulations ample room to influence the sound.
“I confess” (Jordan Morton). Bass and voice.
“Backbeat thing” (Jordan Morton). Bass and voice.
The effects used:
Effects on vocals: Delay, Resonant distorted lowpass
Effects on bass: Reverb, Granular tremolo
The features and the modulator mappings:
(also stating an intended purpose for each mapping)
Bass spectral flatness, and
Bass spectral flux: both features giving shorter reverb time on bass
Purpose: When the bass becomes more noisy, it will get less reverb
Vocal envelope dynamics (dynamic range), and
Vocal transient density: both features giving lower lowpass filter cutoff frequency on reverb on bass
Purpose: When the vocal becomes more active, the bass reverb will be less pronounced
Bass transient density: higher cutoff frequency (resonant distorted lowpass filter) on vocal
Purpose: to animate a distorted lo-fi effect on the vocals, according to the activity level on bass
Vocal mfcc-diff (formant strength, “pressed-ness”): Send level for granular tremolo on bass
Purpose: add animation and drama to the bass when the vocal becomes more energetic
Bass transient density: lower lowpass filter frequency for the delay on vocal
Purpose: clean up vocal delays when bass becomes more active
Vocal transient density: shorter delay time for the delay on vocal
Bass spectral flux: longer delay time for the delay on vocal
Purpose: just for animation/variation
Vocal dynamic range, and
Vocal transient density: both features giving less feedback for the delay on vocal
Purpose: clean up vocal delay for better articulation on text
Liveconvolver tracks Jordan/Jordan:
The tracks are improvisations. Here, Jordan’s voice was recorded as the impulse response and she played bass through the voice IR. Since she plays both instruments, this provides a unique approach to the live convolution performance situation.
Liveconvolver take 1: Jordan Morton bass and voice
Liveconvolver take 2: Jordan Morton bass and voice
Jordan Morton and Miller Puckette
Liveconvolver tracks Jordan/Miller:
These tracks were improvised by Jordan Morton (bass) and Miller Puckette (guitar). Each of the musicians was given the role of "impulse response recorder" in turn, while the other played through the convolver effect.
Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Miller records the IR.
Improvised liveconvolver performance, Jordan Morton (bass) and Miller Puckette (guitar). Jordan records the IR.
Discussion on the performance with live convolution, with Jordan Morton and Miller Puckette.
Miller Puckette and Steven Leffue
These tracks were improvised by Miller Puckette (guitar) and Steven Leffue (sax). The feature-modulator mapping was designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping before the first take. The intention of this strategy was to create a naturally flowing environment of exploration, with not-too-obvious relationships between instrumental gestures and resulting modulations. After the first take, some more details of selected elements of the mapping (one for each musician) were repeated for the performers, with the anticipation that these features might be explored more consciously.
Take 1:
Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 1. Details of the feature-modulator mapping are given below.
Discussion 1 on the crossadaptive performance, with Miller Puckette and Steven Leffue. On the relationship between what you play and how that modulates the effects, on balance of monitoring, and other issues.
The effects used:
Effects on guitar: Spectral delay
Effects on sax: Resonant distorted lowpass, Spectral shift, Reverb
The features and the modulator mappings:
(also stating an intended purpose for each mapping)
Guitar envelope crest: longer reverb time on sax
Purpose: dynamic guitar playing will make a big room for the sax
Guitar transient density: higher cutoff frequency for reverb highpass filter
and
lower cutoff frequency for reverb lowpass filter
Purpose: when guitar is more active, the reverb on sax will be less full (less highs and less lows)
Guitar transient density (again): downward spectral shift on sax
Purpose: animation and variation
Guitar spectral flux: higher cutoff frequency (resonant distorted lowpass filter) on sax
Purpose: just for animation and variation. Note that spectral flux (especially on the guitar) will also give high values on single notes in the low register (the lowest octave), in addition to the expected behaviour of giving higher values on more noisy sounds.
Sax envelope crest: less delay send on guitar
Purpose: more dynamic sax playing will "dry up" the guitar delays; one must play long notes to open the guitar's send to the delay
Sax transient density: longer delay time on guitar. This modulation mapping was also gated by the rms amplitude of the sax (so that it is only active when sax gets loud)
Purpose: loud and fast sax will give more distinct repetitions (further apart) on the guitar delay
Sax pitch: increase spectral delay shaping of the guitar (spectral delay with different delay times for each spectral band)
Purpose: more unnatural (crazier) effect on guitar when sax goes high
Sax spectral flux: more feedback on guitar delay
Purpose: noisy sax playing will give more distinct repetitions (more repetitions) on the guitar delay
Take 2:
Crossadaptive improvisation with Miller Puckette (guitar) and Steven Leffue (sax). Take 2. The feature-modulator mapping was the same as for take 1.
Discussion 2 on the crossadaptive performance, with Miller Puckette and Steven Leffue. Instructions and intellectualizing the mapping made it harder.
Liveconvolver tracks:
Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.
Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Miller records the IR.
Discussion 1 on playing with the live convolver, with Miller Puckette and Steven Leffue.
Improvised liveconvolver performance, Miller Puckette (guitar) and Steven Leffue (sax). Steven records the IR.
Discussion 2 on playing with the live convolver, with Miller Puckette and Steven Leffue.
Steven Leffue and Kyle Motl
Two different feature-modulator mappings were used, and we present one take of each mapping. Like the mappings used for Miller/Steven, these were designed to enable a rich interaction scenario for the performers to explore in their improvisation. The musicians were given only a very brief introduction to the specifics of the mapping. The mapping used for the first take closely resembles the mapping for Steven/Miller, with just a few changes to accommodate the different musical context and how the analysis methods respond to the instruments.
Bass transient density: shorter reverb time on sax
The reverb equalization (highpass and lowpass) was skipped
Bass envelope crest: increase send level for granular processing on sax
Bass rms amplitude: Parametric morph between granular tremolo and granular time stretch on sax
In the first crossadaptive take in this duo, Kyle commented that the amount of delay made it hard to play, and that any fast phrases would just turn into mush. It seemed the choice of effects and modulations was not optimal, so we tried another configuration of effects (and thus another mapping of features to modulators).
This mapping had earlier been used for duo playing between Kyle (bass) and Øyvind (vocal) on several occasions, and it was merely adjusted to accommodate the different timbral dynamics of the saxophone. In this way, Kyle was familiar with the possibilities of the mapping, but not with the context in which it would be used.
The granular processing on both instruments was done with the Hadron Particle Synthesizer, which allows multidimensional parameter navigation through a relatively simple modulation interface (X, Y and 4 expression controllers). The specifics of the actual modulation routing and mapping within Hadron could be described, but it was thought that in the context of the current report, further technical detail would only take away from the clarity of the presentation. Even though the details of the parameter mapping were designed deliberately, at this point in the performative approach to playing with it, we no longer paid attention to technical specifics. Rather, the focus was on letting go and trying to experience the timbral changes rather than intellectualizing them.
The effects used:
Effects on sax: Delay, granular processing
Effects on bass: Reverb, granular processing
The features and the modulator mappings:
(also stating an intended purpose for each mapping)
Sax envelope crest: shorter reverb time on bass
Sax rms amp: higher cutoff frequency for reverb highpass filter
Purpose: louder sax will make the bass reverb thinner
Sax transient density: lower cutoff frequency for reverb lowpass filter
Sax envelope dynamics (dynamic range): higher cutoff frequency for reverb lowpass filter
Purpose: faster sax playing will make the reverb less prominent, but more dynamic playing will enhance it
Sax spectral flux: Granular processing state morph (Hadron X-axis) on bass
Sax envelope dynamics: Granular processing state morph (Hadron Y-axis) on bass
Sax rms amplitude: Granular processing state morph (Hadron Y-axis) on bass
Purpose: animation and variation
Bass spectral flatness: higher cutoff frequency of the delay feedback path on sax
Purpose: more noisy bass playing will enhance delayed repetitions
Bass envelope dynamics: less delay feedback on sax
Purpose: more dynamic playing will give less repetitions in delay on sax
Bass pitch: upward spectral shift on sax
Purpose: animation and variation, pulling in same direction (up pitch equals shift up)
Bass transient density: Granular process expression 1 (Hadron) on sax
Bass rms amplitude: Granular process expression 2 & 3 (Hadron) on sax
Bass rhythmic irregularity: Granular process expression 4 (Hadron) on sax
Bass MFCC diff: Granular processing state morph (Hadron X-axis) on sax
Bass envelope crest: Granular processing state morph (Hadron Y-axis) on sax
Purpose: multidimensional and rich animation and variation
On the second crossadaptive take between Steven and Kyle, I asked: "Does this hinder interaction, or does it make something interesting happen?"
Kyle says it hinders the way they would normally play together. “We can’t go to our normal thing because there’s a third party, the mediation in between us. It is another thing to consider.” Also, the balance between the acoustic sound and the processing is difficult. This is even more difficult when playing with headphones, as the dynamic range and response is different. Sometimes the processing will seem very quiet in relation to the acoustic sound of the instruments, and at other times it will be too loud.
Steven says at one point he started
not
paying attention to the processing and focused mostly on what Kyle was doing. “Just letting the processing be the reaction to that, not treating it as an equal third party. … Totally paying attention to what the other musician is doing and just keeping up with him, not listening to myself.” This also mirrors the usual options of improvisational listening strategy and focus, of listening to the whole or focusing on specific elements in the resulting sound image.
Longer reflective conversation between Steven Leffue, Kyle Motl and Øyvind Brandtsegg. Done after the crossadaptive feature-modulator takes, touching on some of the problems encountered, but also reflecting on the wider context of different kinds of music accompaniment systems.
Liveconvolver tracks:
Each of the musicians was given the role of “impulse response recorder” in turn, while the other then played through the convolver effect.
Discussion 1 on playing with the live convolver, with Steven Leffue and Kyle Motl.
Discussion 2 on playing with the live convolver, with Steven Leffue and Kyle Motl.