Skype on philosophical implications

Skype with Solveig Bøe, Simon Emmerson and Øyvind Brandtsegg.

Our intention for the conversations was to start sketching out some of the philosophical implications of our project, partly as a means of understanding what we are doing and what it actually means, and partly as a means of informing our further work and choosing appropriate directions and approaches. Before we got to these overarching issues, we also discussed some implications of the very here and now of the project. We come back to the overarching issues in the latter part of this post.

Rhythm

Initially, we discussed rhythm analysis that does not assume an underlying constant meter or pulse. Ken Fields' work on internet performance and the nature of time was mentioned, as were John Young's writing on form and phrase length (in Trevor Wishart's music) and Ambrose Seddon's chapter on time structures in Emmerson and Landy's recent book "Expanding the Horizon of Electroacoustic Music Analysis". In live performance there is also George Lewis' rhythm analysis (or event analysis) in his interactive pieces (notably Voyager). See https://www.amherst.edu/media/view/58538/original/Lewis%2B-%2BToo%2BMa

Score as guide

With Oeyvind being at UCSD, where contemporary music performance is one of the very strong fields of competence, he thought loosely about a possible parallel between a composed score and the cross-adaptive performance setting. One could view the situation designed for/around the performers in a cross-adaptive setting as a composition, in that it sets a direction for the interplay and thus also poses certain limits (or obstacles/challenges) to what the performer can and cannot do. Against the possible analogy between the traditional score and our crossadaptive performance scenario, one could raise the objection that a classical performer does not so much follow the do's and don'ts described in a score as use the score as a means to engage with the composer's intention. This may or may not apply to the musics created in our project, as we strive not to limit ourselves to specific musical styles, while still leaning solidly towards a quite freely improvised expression in a somewhat electroacoustic timbral environment. In some of our earlier studio sessions, we noted that there are two competing modes of attention:

  • To control an effect parameter with one’s audio signal, or
  • to respond musically to the situation

… meaning that what the performer chooses to play would traditionally be based on an intended musical statement contributing to the current musical situation, whereas now she might rather play something that (via its analyzed features) will modulate the sound of another instrument in the ensemble. This new something, being a controller signal rather than a self-contained musical statement, might actually express something of its own that contradicts the intended (and presumably achieved) effect it has on changing the sound of another instrument. On top of this, consider that some performers (Simon was quoting the as yet unpublished work of postgraduate and professional bass clarinet performer Marij van Gorkom) experience the use of electronics as disturbing the model of engagement with the composer's intention through the written score, something that one needs to adjust to and accommodate on top of the other layers of mediation. We might then consider our crossadaptive experiments as containing the span of pain to love in one system.

…and then to philosophy

A possible perspective suggested by Oeyvind was a line connecting Kant, Heidegger and Stiegler, possibly starting offside but nevertheless somewhere to start. The connection to Kant lies in his notion of the noumenal world (where the things in themselves are) and the phenomenal world (where we can experience and to some extent manipulate the phenomena). Further, to Heidegger with his thoughts on the essence of technology. Rather than viewing technology as a means to an end (instrumental), or as a human activity (anthropological), he refers to a bringing-forth, a bringing out of concealment into unconcealment. Heidegger then sees technology as a threat to this bringing-forth (of truth), bypassing or denying active human participation (in the real events, i.e. the revealing of truth). At the same time, he suggests that art can be a way of navigating this paradox and actively sidestepping the potential threat of technology. Then on to Stiegler with his view of technics as organized inorganic matter, with something of an immanent drive or dynamic of its own, possibly constituting an impediment to socialization, individuation and intersubjectivization. In hindsight (after today's discussion), this is perhaps a rather gray, or bleak, approach to the issues of technology in the arts. In the context of our project, it was intended to portray the difficulty of controlling something with something else, the reaching into another dimension while acting from the present one.

Solveig countered with a more phenomenological approach, bringing in Merleau-Ponty's perspectives on technology as an extension of the body, of communication and communion, of learning and experiencing together, acting on a level below language. On this level the perceptive and the corporeal – including extensions that are incorporated into the body schema – are intertwined and form a unity. This seems like a path for further investigation. One could easily say that we are learning to play a new instrument, or learning to navigate a new musical environment, so our current endeavors must be understood as baby steps in this process (as also reflected on earlier), and we are this intertwined unity taking form. In this context, Merleau-Ponty's ideas about the learning of language as something that takes place by learning a play of roles, where understanding how to change between roles is an essential part of the learning process, also come into view as appropriate.

Simon also reported on another research student project: cellist Audrey Riley, a long-time member of the Merce Cunningham ensemble, is engaged in examining a (post-Cage) view of performance – including ideas of 'clearing the mind' of thought and language, as they might 'get in the way'. This relates to the idea of 'communion', where there is a sense of wholeness and completeness. It also leads us to Wittgenstein's idea that both in art and philosophy there is something that is unutterable, that may only be shown.

How does this affect what we do?

We concluded our meeting with a short discussion on how the philosophical aspects can be set into play to aid our further work in the project. Are these just high-and-wide ramblings, or can we actually make use of them? Setting up some dimensions of tension, or oppositions (which may not be oppositions at all but rather different perspectives on the same thing), may help us approach the problems we face in practice. Take the assumed opposition of communion versus disturbance: do the crossadaptive interventions into the communication between performers act as a disturbance, or can they be a way of enabling communion? Can we increase the feeling of connectivity? We can also use these lines of thought to describe what happened (in an improvisation), or to convey the implications of the project as a whole. Most important, however, is probably the growth of a vocabulary for reflection through probing our own work in the light of these philosophical implications.

Analyzer: plotting and new parameters

During the last few weeks, I've added some new things to the analyzer: some new feature extraction parameters, some small fixes, and also a 2D plotting of parameters. The plotting makes it much easier to see correlations between extracted features, and as such it is valuable both for familiarizing oneself with the feature extraction methods and for cleaning out redundant analysis parameters. First to the new parameters:

Envelope crest factor: This is what one would normally call just crest factor in audio engineering, but since we use the same kind of measure on different dimensions, we will use envelope crest factor or just envelope crest as its name. The crest factor is technically the peak value divided by the RMS value, in this case of the amplitude envelope. This ratio of peak to average value gives an indication of the range of activity. In our project we also measure the crest factor of other dimensions, like the spectral crest and the crest of the rhythmic autocorrelation. For our purposes, we can use the envelope crest factor to determine the "percussiveness" of a signal: if the audio signal is dry and staccato, with short attacks and clear pauses between them, the envelope crest will be high. For sustained tones (and for silence), the envelope crest will be low. The initial experimentation with this parameter has led me to wish for another variant of it, an active dynamic range analysis, where one could distinguish clear staccato rhythms with a high degree of dynamics from staccato rhythms with stable/steady dynamics.
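To make the measure concrete, here is a minimal offline sketch in Python. The analyzer itself lives in analyzer.csd (Csound), so this is only an illustration of the peak/RMS ratio, not the project's code; the function name is made up, and env is assumed to be a sampled amplitude envelope over a recent analysis window.

```python
import numpy as np

def envelope_crest(env):
    """Peak-to-RMS ratio of an amplitude envelope over a recent window.

    High values: dry, staccato material with clear pauses between attacks.
    Low values: sustained tones, where the envelope stays close to its peak.
    """
    env = np.abs(np.asarray(env, dtype=float))
    rms = np.sqrt(np.mean(env ** 2))
    if rms == 0.0:
        return 0.0  # treat silence as zero crest instead of dividing by zero
    return float(np.max(env) / rms)
```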

Transient density: We now have a better algorithm for calculating this analysis parameter. It reflects the number of transients per second and will naturally fluctuate a bit. A filter with fast rise and slow decay time has also been applied to it, so it will slowly dwindle back to zero when activity stops.
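As a rough illustration of the fast rise / slow decay behaviour, here is a hedged Python sketch of a single control-rate smoothing step. The coefficient values and the function name are purely illustrative, not the settings used in analyzer.csd.

```python
def smooth_density(raw_per_second, previous, rise=0.5, decay=0.99):
    """One control-rate update of the smoothed transient density.

    raw_per_second: transients counted over the last second.
    previous: the previous smoothed output value.
    The asymmetric one-pole filter rises quickly when activity increases
    and decays slowly, so the value dwindles back towards zero when
    playing stops.
    """
    coef = rise if raw_per_second > previous else decay
    return coef * previous + (1.0 - coef) * raw_per_second
```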

Plotting

The analyzer now has a 2D plotting area, inspired by seeing Miller Puckette do something similar when we experimented with some analysis methods in PD. The plot does not have a control function, so it does not actually produce any modulation data by itself; rather, we can use it to look at how the signals behave over time. We can also see how different analysis features correspond to different kinds of playing, leaving different traces in the plot. The ability to see how much the different analysis features correlate also makes it easier to find which features are relevant for use as modulators and which ones are perhaps redundant. We can plot signals along three dimensions: x, y and colour. The x axis goes from left to right, the y axis from bottom to top, and the colour follows the rainbow from red to blue (maxing out at violet).
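As an illustration of this three-dimensional mapping (not the actual plotting code in the analyzer), a small Python sketch of how three normalized feature values might be placed along x, y and a red-to-violet colour scale; the function name and the hue scaling are assumptions for the sake of the example.

```python
import colorsys

def plot_point(x_feat, y_feat, c_feat):
    """Map three normalized features (0..1) to a point in the 2D plot.

    x_feat and y_feat become the plot coordinates; c_feat is mapped
    along the rainbow from red (0.0) towards violet (1.0).
    """
    c = max(0.0, min(1.0, c_feat))
    hue = 0.83 * c  # in HSV, hue 0.0 is red and roughly 0.83 is violet
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return x_feat, y_feat, (r, g, b)
```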

Plotting is enabled by clicking the button "not" (changing it into "plot"), and the plot can be cleared with the clear button. The plot has a set maximum number of items it can show (currently 200, although this can be set freely in the analyzer.csd code). When the maximum number of items has been plotted, we begin re-using the available items. This creates a natural decay of the plotted values, as older values (200 measurements ago) are replaced by more recent ones (a rough code sketch of this re-use scheme follows after the screenshots below). The plot update method can be set to be periodic (metro), with a selectable update rate (number of points per second). Alternatively, it can be set to plot a value for every transient in the audio signal. In the case of transient-triggered plotting, the features of interest may not have stabilized at the exact time of the transient, so we added a selectable delay (plot the value reached N milliseconds after the transient). Here are some screenshots: first two situations of long, quite steady, held notes; then two situations of staccato, fluctuating melodies. The envelope crest is plotted on the x axis, the pitch on the y axis, and the transient density is represented by colour.

analyzer_plot_steadynotes_both
Plot of two situations with long steady notes

analyzer_plot_staccatonotes_both
Plot of two different situations with staccato fluctuating notes
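The re-use of plot items described above can be sketched roughly like this; again an illustrative Python analogue rather than the actual analyzer.csd code, with the current 200-item limit used as the default and the class name made up for the example.

```python
from collections import deque

class PlotBuffer:
    """Fixed-size store of plotted points.

    Once max_items points have been added, each new point overwrites the
    oldest one, giving the plot its natural decay: a value survives
    max_items further measurements before it disappears.
    """
    def __init__(self, max_items=200):
        self.points = deque(maxlen=max_items)

    def add(self, x, y, colour):
        # appending to a full deque silently drops the oldest point
        self.points.append((x, y, colour))
```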