Feature extraction – Cross adaptive processing as musical intervention
Exploring radically new modes of musical interaction in live performance
http://crossadaptive.hf.ntnu.no

Session with Michael Duch (posted 22 February 2018)

On February 12th we did a session with Michael Duch on double bass, exploring auto-adaptive use of our techniques. We were interested in seeing how the crossadaptive techniques could be used for personal timbral expansion for a single player. This is a step back in complexity from the crossadaptive interplay, but it is interesting for two reasons. One is to check how useful our techniques of modulation are in a setting with more traditional performer control: when there is only one performer modulating himself, there is a closer relationship between performer intention and timbral result. Two, the reason to do this specifically with Michael is that we know from his work with Lemur and other settings that he relates intently and intimately to the performance environment, the resonances of the room and the general ambience. Because of this focus, we also wanted to use live convolution techniques, where he first records an impulse response and then plays through the same filter himself. This exposed one feature needed in the live convolver: one might want to delay the activation of the new impulse response until its recording is complete (otherwise we will almost certainly get extreme resonances while recording, since the filter and the excitation are very similar). That technical issue aside, it was musically very interesting to hear how he explored resonances in his own instrument, and also used small amounts of detuning to approach beating effects in the relation between filter and excitation signal. The self-convolution also exposes parts of the instrument spectrum that usually are not so noticeable, like bassy components of high notes, or prominent harmonics that would otherwise be perceptually masked by their merging into the full tone of the instrument.
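To make the deferred-activation idea concrete, here is a minimal Python sketch (my own illustration under stated assumptions, not the project's actual Csound liveconvolver): the incoming impulse response is buffered while the old filter keeps running, and the new filter is only swapped in once recording stops, so that filter and excitation from the same material are never active at the same time.

```python
import numpy as np
from scipy.signal import fftconvolve

class DeferredLiveConvolver:
    """Sketch of a block-based convolver where a newly recorded IR is
    activated only after its recording has finished (illustration only)."""

    def __init__(self):
        self.active_ir = np.array([1.0])  # pass-through until an IR is committed
        self.pending = []                 # blocks of the IR currently being recorded
        self.recording = False
        self.tail = np.zeros(0)           # overlap-add tail of the convolution

    def start_ir_recording(self):
        self.pending = []
        self.recording = True

    def stop_ir_recording(self):
        # Deferred activation: the new filter takes effect only here.
        self.recording = False
        if self.pending:
            ir = np.concatenate(self.pending)
            peak = np.max(np.abs(ir))
            self.active_ir = ir / peak if peak > 0 else ir
            self.tail = np.zeros(0)       # drop the old filter's ringing tail

    def process(self, block):
        """Convolve one block of live input with the currently active IR."""
        if self.recording:
            self.pending.append(block.copy())   # capture IR, old filter keeps sounding
        wet = fftconvolve(block, self.active_ir)
        n = min(len(self.tail), len(wet))
        wet[:n] += self.tail[:n]                # overlap-add tail from previous blocks
        self.tail = wet[len(block):]
        return wet[:len(block)]
```

In use, start_ir_recording() and stop_ir_recording() would be triggered by the performer or the system operator, and the resonant build-up described above is avoided because the recorded material never passes through the filter it is simultaneously defining.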

 

2018_02_Michael_test1 – Take 1, autoadaptive exploration
2018_02_Michael_test2 – Take 2, autoadaptive exploration

Self-convolution:

2018_02_Michael_conv1 – Self-convolution take 1
2018_02_Michael_conv2 – Self-convolution take 2
2018_02_Michael_conv3 – Self-convolution take 3
2018_02_Michael_conv4 – Self-convolution take 4
2018_02_Michael_conv5 – Self-convolution take 5
2018_02_Michael_conv6 – Self-convolution take 6

 

Crossadaptive seminar Trondheim, November 2017 (posted 5 November 2017)

As part of the ongoing research on crossadaptive processing and performance, we had a very productive seminar in Trondheim on the 2nd and 3rd of November 2017. The current post will show the program of presentations, performances and discussions, and provide links to more detailed documentation of each session as it becomes available. Each part will be added as the documentation is ready, so if something is missing now, do check back later. The seminar was also streamed, and the recorded streams will be archived.

In addition to the researchers presenting, we also had an audience of students from the music technology and jazz departments, as well as other researchers and teachers from NTNU. We are grateful for the input from the audience, which enriched our discussions.

Program:

Thursday 2. November

Practical experiments

Introduction and status. [slides]
Øyvind Brandtsegg

 

Performance
Maja S.K. Ratkje, Øyvind Brandtsegg, Miller Puckette (stand-in for Stian Westerhus)

 

Work methods and session reports. Experiences, insights, reflections.
Trond Engum (with team) [slides], Bernt Isak Wærstad (with team) [slides]

Instruments and tools

Instrumental affordances, crossadaptivity as instrumental gesture.
Marije Baalman [slides]

 


Performance
Tone Åse, Carl Haakon Waadeland, Trond Engum

 

Instrument design and technological developments. [slides]
Sigurd Saue, Victor Lazzarini, Øyvind Brandtsegg

 

Friday 3. November

Reflection. Aesthetic and philosophical issues

Documentation methods [slides]
Andreas Bergsland


Performance
Bjørnar Habbestad, Gyrid N. Kaldestad, Bernt Isak Wærstad

 

What does it mean for the performer, for the audience, for the music? How does it change the game?
Solveig Bøe [notes], Simon Emmerson [slides]

Wider use and perspectives

Experiences with Philippe Manoury and Juliana Snapper, thoughts on instrumental control, and a performance
Miller Puckette [PD patches]
(with Øyvind Brandtsegg for a brief liveconvolver performance)

 

Looking at the music from the mix perspective. Viability of effects as expression. The wider field of automatic mixing and adaptive effects.
Gary Bromham [slides], Josh Reiss [slides]

 

Outcomes and evaluation. [slides]
Moderator: Øyvind Brandtsegg

Session with Jordan Morton and Miller Puckette, April 2017 (posted 9 June 2017)

This session was conducted as part of the preparations for the larger session in UCSD Studio A, and we worked on calibrating the analysis methods to Jordan's double bass and vocals. Some of this calibration and accommodation of signals also includes the fun creative work of figuring out which effects and which effect parameters to map the analyses to. The session resulted in some new discoveries in this respect, for example using the spectral flux of the bass to control vocal reverb size, and using transient density to control very low range delay time modulations. Among the issues we discussed were aspects of timbral polyphony, i.e. how many simultaneous modulations can we perceive and follow?
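As an illustration of this kind of feature-to-parameter mapping (a simplified sketch with made-up ranges, not the actual MIDIator configuration from the session), an analysis value such as the spectral flux of the bass can be scaled, clipped and smoothed before it drives an effect parameter such as vocal reverb size:

```python
def map_feature(value, in_lo, in_hi, out_lo, out_hi, state, smooth=0.9):
    """Scale an analysis value into an effect-parameter range, with
    exponential smoothing to tame frame-to-frame jitter (sketch only)."""
    x = (value - in_lo) / (in_hi - in_lo)
    x = min(max(x, 0.0), 1.0)                 # clip to 0..1
    target = out_lo + x * (out_hi - out_lo)
    return smooth * state + (1.0 - smooth) * target

# hypothetical ranges: flux observed between 0.0 and 0.4, reverb size 0.2 to 0.95
reverb_size = 0.2
for flux in [0.05, 0.1, 0.3, 0.35, 0.2]:
    reverb_size = map_feature(flux, 0.0, 0.4, 0.2, 0.95, reverb_size)
```

The same pattern would apply to the transient-density-to-delay-time mapping, only with a much lower output range and slower smoothing.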

 

Second session at Norwegian Academy of Music (Oslo) – January 13 and 19, 2017 (posted 7 June 2017)

Participants: Bjørnar Habbestad (flute), Bernt Isak Wærstad (guitar), Gyrid Nordal Kaldestad (voice)

The focus for this session was to play with, fine-tune and work further on the mappings we set up during the last session at NMH in November. For practical reasons, we had to split the session into two half-days, on the 13th and 19th of January.

13th of January 2017

We started by analysing 4 different musical gestures for the guitar, a step that had been skipped due to time constraints during the last session. During this analysis we found the need to specify the spread of the analysis results in addition to the region. This way we could differentiate the analysis results in terms of stability and conclusiveness. We decided to analyse the flute and vocal again to add the new parameters.

19th of January 2017

After the analysis was done, we started working on a mapping scheme which involved all 3 instruments, so that we could play in a trio setup. The mappings between flute and vocal were the same as in the November session.

The analyser was still run in Reaper, but all routing, effects chains and mapping (MIDIator) were now done in Live. The software instability (the old Reaper projects from November wouldn't open) and the change of DAW from Reaper to Live meant that we had to set up and tune everything from scratch.

Sound examples with comments and immediate reflections

1. Guitar & Vocal – First duo test, not ideal, forgot to mute analyser.

2. Guitar & Vocal retake – Listened back on speakers after recording. Nice sounding. Promising.

Reflection: There seems to be some elements missing, in a good way, meaning that there is space left for things to happen in the trio format. There is still need for fine-tuning of the relationship between guitar and vocal. This scenario stems from the mapping being done mainly with the trio format in mind.

3. Vocals & flute – Listened back on speakers after recording.

Reflections: dynamic soundscape, quite diverse results, some of the same situations as with take 2, the sounds feel complementary to something else. Effect tuning: more subtle ring mod (good!) compared to last session, the filter on vocals is a bit too heavy-handed. Should we flip the vocal filter? This could prevent filtering and reverb taking place simultaneously. Concern: is the guitar/vocal relationship weaker compared to vocal/flute? Another idea comes up – should we look at connecting gates or bypasses in order to create dynamic transitions between dry and processed signals?

4. Flute & Guitar

Reflections: both the flute ring mod and the guitar delay are a bit on the heavy side, not responsive enough. Interesting how the effect transformations affect material choices when improvising.

5. Trio

Comments and reflections after the recording session

It is interesting to be in a situation where you, as you play, have a multi-layered focus: playing, listening, thinking of how you affect the processing of your fellow musicians and how your own sound is affected, and trying to make something worth listening to. Of course we are now in an "etude mode", but still striving for the goal: great output!

There seems to be a bug in the analyser tool when it comes to consistency: sometimes some parameters drop out. We found that it is a good idea to run the analysis a couple of times for each sound to get the most precise result.

Seminar on instrument design, software, control (posted 24 March 2017)

Online seminar, March 21

Trond Engum and Sigurd Saue (Trondheim)
Bernt Isak Wærstad (Oslo)
Marije Baalman (Amsterdam)
Joshua Reiss (London)
Victor Lazzarini (Maynooth)
Øyvind Brandtsegg (San Diego)

Instrument design, software, control

We now have some tools that allow practical experimentation, and we've had the chance to use them in some sessions. We have some experience of what they solve and don't solve, and of how simple (or not) they are to use. We know that they are not completely stable on all platforms; there are some "snags" on initialization and/or termination that give different problems on different platforms. Still, in general, we have just enough to evaluate the design in terms of instrument building, software architecture, interfacing and control.

We have identified two distinct modes of working crossadaptively: the Analyzer-Modulator workflow and the Direct-Cross-Synthesis workflow. The Analyzer-Modulator method consists of extracting features and arbitrarily mapping these features as modulators to any effect parameter. The Direct-Cross-Synthesis method consists of a much closer interaction directly between the two audio signals, for example as seen with the liveconvolver and/or different forms of adaptive resonators. These two methods give very different ways of approaching the crossadaptive interplay, with the direct-cross-synthesis method being perceived as closer to the signal, and as such, in many ways closer to each other for the two performers. The Analyzer-Modulator approach allows arbitrary mappings, and this is both a strength and a weakness. It is powerful in that it allows any mapping, but it is harder to find mappings that are musically and performatively engaging. At least this can be true when a mapping is used without modification over a longer time span. As a further extension, an even more distanced manner of crossadaptive interplay was recently suggested by Lukas Ligeti (UC Irvine, following Brandtsegg's presentation of our project there in January). Ligeti would like to investigate crossadaptive modulation of MIDI signals between performers. The mapping and processing options for event-based signals like MIDI would have even more degrees of freedom than what we achieve with the Analyzer-Modulator approach, and it would have an even greater degree of "remoteness" or "disconnectedness". For Ligeti, one of the interesting things is the disconnectedness and how it affects our playing. In perspective, we start to see some different viewing angles on how crossadaptivity can be implemented and how it can influence communication and performance.

In this meeting we also discussed problems with the current tools, mostly concerning the tools of the Analyzer-Modulator method, as that is where we have experienced the most obvious technical hindrances to effective exploration. One particular problem is the use of MIDI controller data as our output. Even though it gives great freedom in modulator destinations, it is not straightforward for a system operator to keep track of which controller numbers are actively used and what destinations they correspond to. Initial investigations of using OSC in the final interfacing to the DAW have been done by Brandtsegg, and the current status of many DAWs seems to allow "auto-learning" of OSC addresses based on touching controls of the modulator destination within the DAW. A two-way communication between the DAW and our mapping module should be within reach and would immensely simplify that part of our tools.
We also discussed the selection of features extracted by the Analyzer: which ones are most actively used, whether any could be removed and/or whether any could be refined.
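As a sketch of what the OSC-based interfacing could look like (an assumption for illustration, not the current implementation; the address and port are hypothetical and would be whatever the DAW auto-learns), a modulator value could be sent with the python-osc package:

```python
from pythonosc.udp_client import SimpleUDPClient

# DAW listening for OSC on the local machine (port chosen in the DAW, assumed here)
client = SimpleUDPClient("127.0.0.1", 8000)

def send_modulator(address, value):
    """Send one modulator value (0..1) to an OSC address learned by the DAW."""
    client.send_message(address, float(value))

# hypothetical address for a reverb size control, auto-learned by touching it in the DAW
send_modulator("/track/2/fx/reverb/size", 0.73)
```

With two-way communication, the mapping module could additionally listen for the addresses the DAW reports back when a control is touched, removing the bookkeeping of MIDI controller numbers.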

Initial comments

Each of the participants was invited to give their initial comments on these issues. Victor suggests we could rationalize the tools a bit, simplify, and get rid of the Python dependency (which has caused some stability and compatibility issues). This should be done without losing flexibility and usability. Perhaps a turn towards the originally planned direction of relying basically on Csound for analysis instead of external libraries. Bernt has had some meetings with Trond recently and they have some common views. For them it is imperative to be able to use Ableton Live for the audio processing, as the creative work during sessions is really only possible using tools they are familiar with. Finding solutions to aesthetic problems that may arise requires quick turnarounds, and for this to be viable, familiar processing tools. There have been some issues related to stability in Live, which have sometimes significantly slowed down or outright hindered an effective workflow. Trond appreciates the graphical display of signals, as it helps in teaching performers how the analysis responds to different playing techniques.

Modularity

Bernt also mentions the use of very simple scaled-down experiments directly in Max, done quickly with students. It would be relatively easy to make simple patches that combine analysis of one (or a few) features with a small number of modulator parameters. Josh and Marije also mention modularity and scaling down as measures to clean up the tools. Sigurd has some other perspectives on this, as it also relates to what kind of flexibility we might want and need, how much free experimentation with features, mappings and destinations is needed, and also whether we are making the tools for an end user or for the research personnel within the project. Øyvind also mentions some arguments that directly oppose a modular structure, both in terms of the number of separate plugins and separate windows needed, and also in terms of analyzing one signal in relation to activity in another (e.g. for cross-bleed reduction and/or masking features etc.).

Stability

Josh asks about the stability issues reported: are there any particular feature extractors, or other elements, that have been identified as triggering instabilities? Øyvind and Victor discuss the Python interface a bit, as this is one issue that frequently comes up in relation to compatibility and stability. There are things to try, but perhaps the most promising route is to get rid of the Python interface altogether. Josh also asks about the preferred DAW used in the project, as this obviously influences stability. Øyvind has good experience with Reaper, and this coincides with Josh's experience at QMUL. In terms of stability and flexibility of routing (multichannel), Reaper is the best choice. Crossadaptive work directly in Ableton Live can be done, but always involves a hack. Other alternatives (Plogue Bidule, Bitwig, ...) are also discussed briefly. Victor suggests selecting a reference set of tools, which we document well in terms of how to use them in our project. Reaper has not been stable for Bernt and Trond, but this might be related to the setting of specific options (running plugins in separate/dedicated processes, and other performance options). In any case, the two DAWs of immediate practical interest are Reaper (in general) and Live (for some performers). An alternative to using a DAW to host the Analyzer might also be to create a standalone application, as a "server" sending control signals to any host. There are, however, good reasons for keeping it within the DAW, both for session management (saving setups) and for preprocessing of input signals (filtering, dynamics, routing).

Simplify

Some of the stability issues can be remedied by simplifying the analyzer, getting rid of unused features, and also getting rid of the Python interface. Simplification will also enable use by less experienced users, as it enables self-education and the ability to just start using the tools and experiment. Modularity might also enhance such self-education, but one take on "modularity" might simply be hiding irrelevant aspects of the GUI.
In terms of feature selection, the filtering of the GUI display (showing only a subset) is valuable. We also see that the number of actively used parameters is generally relatively low; our "polyphonic attention" for following independent modulations is generally limited to 3-4 dimensions.
It seems clear that we have some overshoot in terms of flexibility and number of parameters in the current version of our tools.

Performative

Marije also suggests we should investigate further what happens with repeated use: when the same musicians use the same setup several times over a period of time, working more intensively, just playing, seeing which combinations wear out and which stay interesting. This might guide us in the general selection of valuable features. Within the short time span of one performance, we also touched briefly on the issue of using static mappings as opposed to changing the mapping on the fly. Giving the system operator a more expressive role might also resolve situations where a particular mapping wears out or becomes inhibiting over time. So far we have created very repeatable situations, to investigate in detail how each component works. Using a mapping that varies over time can enable more interesting musical forms, but will also in general make the situation more complex. Remembering how performers in general can respond positively to a certain "richness" of the interface (tolerating and even being inspired by noisy analysis), perhaps varying the mapping over time can also shift the attention towards the sounding result and playing by ear holistically, rather than intellectually dissecting how each component contributes.
Concluding remarks also suggest that we still need to play more with it, to become more proficient, have more control, explore, and get used to (and get tired of) how it works.

 

 

 

Conversation with Marije, March 2017 (posted 20 March 2017)

After an inspiring talk with Marije on March 3rd, I set out to write this blog post to sum up what we had been talking about. As it happens (and has happened before), Marije had a lot of pointers to other related works and writings. Only after I had looked at the material she pointed to, and reflected upon it, did I get around to writing this blog post. So substantial parts of it contain more of a reflection after the conversation than an actual report of what was said directly.
Marije mentions that we have done a lot of work; it is inspiring, solid, and looks good.

Agency, focus of attention

One of the first subjects in our conversation was how we relate to the instrument. For performers: How does it work? Does it work? (Does it do what we say/think it does?) What do I control? What controls me? When successful, it might constitute a third agency, a shared feeling, mutual control: not acting as a single musician, but as an ensemble. The same observation can of course be made (when playing) in acoustic ensembles too, but it is connected differently in our setting.

Direct/indirect control. Do we play music or generate control signals? Very direct and one-dimensional mappings can easily feel like playing to generate control signals. Some control signals can be formed (by analysis) over longer time spans, as they represent more of a "situation" than an immediate "snapshot". Perhaps it is just as interesting for a musician to outline a situation over time as to simply control one sonic dimension by acting on another?

Out-of-time'd-ness, relating to the different perceptions of the performative role experienced in IR recording (see posts on convolution sessions here, here and here). A similar experience can be identified within other forms of live sampling, and it is to some degree recognizable in all sorts of live processing as an instrumental activity. For the live processing performer there is a detachedness of control, as opposed to directly playing each event.

Contrived and artificial mappings. I asked whether the analyzer-modulation mappings are perhaps too contrived, too "made up". Marije replied that everything we do with electronic music instrument design (and mapping) is to some degree made up. It is always arbitrary: design decisions, something made up. There is no single "real" way, no physical necessity or limitation that determines what the "correct" mapping is. As such, there are only mappings that emphasize different aspects of performance and interaction, and new ideas that might seem "contrived" can contain yet-to-be-seen areas of such emphasis. Composition lies in these connections. For longer pieces one might want variation in the mapping, for example in the combined instrument created by voice and drums in some of our research sessions. Depending on the combination and how it is played, the mapping might wear out over time, so one might want to change it during a single musical piece.

Limitation. In January I did a presentation at UC Irvine, for an audience well trained in live processing and electronic music performance. One of the perspectives mentioned there was that the cross-adaptive mapping could also be viewed as a limitation. One could claim that all of the modulations we perform cross-adaptively could have been controlled manually, and with much more musical freedom. Still, the crossadaptive situation provides another kind of dynamic. The acoustic instrument is immediate and multidimensional, providing a nuanced and intuitive interface, and we can tap into that. As an example of how the interface changes the expression, look at how we (Marije) use accelerometers over 3 axes of motion: one could produce exactly the same control signals using 3 separate faders, but the agency of control, the feeling, the expressivity, the dynamic is different with accelerometers than with faders. It is different to play, and this will produce different results. The limitations (of an interface or a mapping) can be viewed as something interesting, just as much as something that inhibits.

Analyzer noise and flakiness

One thing that has concerned me lately is the fact that the analyzer is sometimes too sensitive to minor variations in the signal. Mathematical differences sometimes occur on a different scale than the expressive differences. One example is the rhythm analyzer, which I think is too noisy and unreliable, seen in the light of its practical use in sessions, where the musicians found it very appropriate and controllable.
Marije reminds me that in the live performance setting, small accidents and surprises are inspiring. In a production setting, perhaps not so much. Musicians are trained to embrace the imperfections and characteristic traits of their instrument, so it is natural for them to respond in a similar manner to imperfections in the adaptive and crossadaptive control methods. This makes me reflect on whether there is a research methodology of accidents(?), on how to understand the art of the accident, how to understand the failure of the algorithm, as in glitch, circuit bending, and other art forms relating to distilling and refining "the unwanted".

Rhythm analysis

I will refine the rhythm analysis, as it seems promising as a measure of musical expressivity. I have some ideas about maintaining several parallel hypotheses on how to interpret the input, based on previous rhythm research. Some of this comes from "Machine Musicianship" by Robert Rowe, some from reading a UCSD dissertation by Michelle L. Daniels: "An Ensemble Framework for Real-Time Beat Tracking". I am currently trying to distill these into the simplest possible method of rhythm analysis for our purposes. So I ask Marije for ideas on how to refine the rhythm analyzer. Rhythm can be one parameter that outlines "a situation" just as much as it creates a "snapshot" (recall the discussion of agency and direct/indirect control, above). One thing we may want to extract is slower shifts, from one situation to another. My concern that it takes too long to analyze a pattern (well, at least as long as the pattern itself, which might be several seconds) can then be regarded as less of a concern, since we are not primarily looking for immediate output. Still, I will attempt to minimize the latency of the rhythm analysis, so that any delay in response is due to aesthetic choice, and not so much limited by the technology. She also mentions the other Nick Collins. I realize that he is the one behind the bbcut algorithm, also found in Csound, which I used a lot a long time ago. Collins has written a library for feature extraction within SuperCollider, and to some degree it overlaps with the feature extraction in our Analyzer plugin. Collins invokes external programs to produce similarity matrices, something that might be useful for our purposes as well, as a means of finding temporal patterns in the input. His rhythm analysis is based on beat tracking, as is common. While our rhythm analysis attempts *not to rely* on beat tracking, we could still perhaps implement it, if nothing else to use it as a measure of beat-tracking confidence (assuming this as a very coarse distinction between beat-based and more temporally free music).
Another perspective on rhythm analysis can perhaps also be gained from Clarence Barlow's interest in ratios. The Ratio Book is available online, as is a lot of his other writing. Barlow states "In the case of ametric music, all pulses are equally probable", which leads me to think that any sort of statistical analysis of the frequency of occurrence of observed inter-onset times will start to give indications of "what this is", to lift it slowly out of the white-noise mud of equal probabilities.
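As a rough sketch of that statistical idea (my own illustration, not part of the Analyzer): collect the observed inter-onset intervals into a histogram and use its normalized entropy as a crude indication of how far the material has moved away from "all pulses equally probable".

```python
import numpy as np

def ioi_entropy(onset_times, bins=24, max_ioi=2.0):
    """Normalized entropy (0..1) of the inter-onset-interval distribution.
    Near 1: intervals spread out (close to 'all pulses equally probable').
    Near 0: intervals concentrated on a few values (clear temporal patterning)."""
    iois = np.diff(np.asarray(onset_times))
    iois = iois[(iois > 0) & (iois <= max_ioi)]
    if len(iois) < 2:
        return 1.0
    hist, _ = np.histogram(iois, bins=bins, range=(0.0, max_ioi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

print(ioi_entropy([0.0, 0.5, 1.0, 1.5, 2.0, 2.5]))     # steady pulse -> low value
print(ioi_entropy([0.0, 0.31, 0.9, 1.07, 1.7, 2.33]))  # scattered onsets -> higher value
```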

Barlow uses the "Indispensability formula" for relating the importance of each subdivision within a given meter. Perhaps we could invert this somehow to give a general measure of "subdivided-ness"? We're not really interested in finding the meter, but the patterns of subdivision are nonetheless of interest. He also uses the "Indigestibility formula" for ratios, based on prime-ness, and suggests a cultural digestibility limit around 10 (10:11, 11:12, 12:13, ...). I've been pondering different ways of ordering the complexity of different integer ratios, such as different rhythmic subdivisions. The indigestibility formula might be one way to approach it, but reading further in the Ratio Book, the writing of Demetrios E. Lekkas leads me to think of another way to sort the subdivisions into increasing complexity:

Lekkas describes the traditional manner of writing down all rational numbers by starting with 1/1 (p. 38), then increasing the numerator by one, then going through all denominators from 1 up to the numerator, skipping fractions that can be simplified since they represent numbers already represented. This ordering does not imply any relation to the complexity of the ratios produced. If we tried to use it as such, one problem is that it determines that subdividing in 3 is less complex than subdividing in 4. Intuitively, I'd say a rhythmic subdivision in 3 is more complex than a further subdivision of the first established subdivision in 2. Now, to find a measure of complexity, could we assume that divisions falling further away from any previously established subdivision are simpler than those creating closely spaced divisions? So, when dividing 1/1 in 2, we get a value at 0.5 (in addition to 0.0 and 1.0, which we omit for brevity). Then, trying to decide which further division is the next least complex, we try out all possible further subdivisions up to some limit, and look at the resulting values and their distances to the already existing values.
Dividing in 3 gives 0.33 and 0.66 (approx.), while dividing in 4 gives the new values 0.25 and 0.75. Dividing by 5 gives new values at .2 and .4; dividing by 6 is unnecessary, as it does not produce any larger distances than those already covered by 3. Dividing by 7 gives values at .142, .285 and .428. Dividing by 8 is unnecessary, as it does not produce any values of larger distance than dividing by 4.
The lowest distance introduced by dividing in 3 is from 0.33 to 0.5, a distance of approx. 0.17. The lowest distance introduced by dividing in 4 is from 0.25 to 0.5, a distance of 0.25. Dividing into 4 is thus less complex. Checking division by 5 and 7 can be left as an exercise for the reader.
Then we go on to the next subdivision, as we now have a grid of 1/2 plus 1/4, with values at 0.25, 0.5 and 0.75. The next two alternatives (in increasing numeric order) are division by 3 and division by 5. Division by 3 gives a smallest distance (to our current grid) from 0.25 to 0.33 = 0.08. Division by 5 gives a smallest distance from 0.2 to 0.25 = 0.05. We conclude that division by 3 is less complex. But wait, let's check division by 8 too while we're at it (leaving division by 6 and 7 as an exercise for the reader). Division by 8, in relation to our current grid (.25, .5, .75), gives a smallest distance of 0.125. This is larger than the smallest distance produced by division in 3 (0.08), so we choose 8 as our next number in increasing order of complexity.
Following up on this method, using a highest subdivision of 8, eventually gives us the order 2, 4, 8, 3, 6, 5, 7 as subdivisions in increasing order of complexity. This coincides with my intuition of rhythmic complexity, and can be reached by the simple procedure outlined above. We could also use the same procedure to determine an exact value of complexity for each of these subdivisions, as a means to create an output "value of complexity" for integer ratios. As a side note to myself: check how this will differ from using Tenney height or Benedetti height, as I have used earlier in the Analyzer.
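A minimal sketch of that procedure (my own illustration of the method described above, not code from the Analyzer): at each step, pick the remaining subdivision whose new grid points lie farthest, in the minimal-distance sense, from the already established grid.

```python
def subdivision_order(max_div=8):
    """Order subdivisions 2..max_div by increasing rhythmic complexity,
    using the 'largest minimal distance to the existing grid' rule (sketch)."""
    grid = {0.0, 1.0}
    remaining = list(range(2, max_div + 1))
    order = []
    while remaining:
        best, best_dist = None, -1.0
        for n in remaining:
            new_points = [k / n for k in range(1, n) if k / n not in grid]
            # smallest distance from any new point to the existing grid
            dist = min(min(abs(p - g) for g in grid) for p in new_points) if new_points else 0.0
            if dist > best_dist:          # ties resolve to the smaller divisor
                best, best_dist = n, dist
        order.append(best)
        grid.update(k / best for k in range(1, best))
        remaining.remove(best)
    return order

print(subdivision_order(8))   # -> [2, 4, 8, 3, 6, 5, 7]
```

The per-step "best distance" values could also serve directly as the numeric "value of complexity" suggested above.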

On the justification for coming up with this procedure I might lean lightly on Lekkas again: “If you want to compare them you just have to come up with your own intuitive formula…deciding which one is more important…That would not be mathematical. Mind you, it’s non-mathematical, but not wrong.” (Ratio book p 40)
Much of the book relates to ratios as in pitch ratios and tuning. Even though we can view pitch and rhythm as activity on the same scale, as vibrations/activations at different frequencies, the perception of pitch is further complicated by the anatomy of our inner ear (critical bands), and by cultural aspects and habituation. Presumably, these additional considerations should not be used to infer the complexity of rhythmic activity. We cannot directly use the harmonicity of pitch as a measure of the harmonicity of rhythm, even though it might *to some extent* hold true (and I have used this measure up until now in the Analyzer).

Further writings by Barlow on this subject can also be found in his On Musiquantics. "Can the simplicity of a ratio be expressed quantitatively?" (p. 38) relates to the indigestibility formula. See also how "metric field strength" (p. 44) relates to the indispensability formula. The section from pp. 38-47 concerns this issue, as does his "Formulæ for Harmonicity" (p. 24, part II), with Interval Size, Ratios and Harmonic Intensity on the following pages. For pitch, the critical bandwidth (p. 48) is relevant, but we could discuss whether the "largest distance created by a subdivision" approach I outlined above is more appropriate for rhythmic ratios.

Instrumentality

The 3DMIN book "Musical Instruments in the 21st Century" explores various notions of what an instrument can be, for example the instrument as a possibility space. Lopes/Hoelzl/de Campo, in their many-fest, "favour variety and surprise over logical continuation" and "enjoy the moment we lose control and gain influence". We can relate this to our recent reflections on how performers in our project thrive in a setting where the analysis methods are somewhat noisy and chaotic, the essence being that they can control the general trend of modulation, but still be surprised and "disturbed" by the immediate details. Here we again encounter methods of the "less controllable": circuit bending, glitch, autopoietic (self-modulating) instruments, meta-control techniques (de Campo), and similarly the XY interface for our own Hadron synthesizer, to mention a few apparent directions. The 3DMIN book also has a chapter by Daphna Naphtali on using live processing as an instrument. She identifies some potential problems with the invisible instrument. One problem, according to Naphtali, is that it can be difficult to identify the contribution of the performer operating it. One could argue that invisibility is not necessarily a problem(?), but the invisible and the intangible is indeed a characteristic trait of the kind of instruments we are dealing with, be it for live processing as controlled by an electronic musician, or for crossadaptive processing as controlled by the acoustic musicians.

Marije also has a chapter in this book, on the blurring boundaries between composition, instrument design, and improvisation: "…the algorithm for the translation of sensor data into music control data is a major artistic area; the definition of these relationships is part of the composition of a piece" (Waisvisz 1999, cited by Marije).

Using adaptive effects as a learning strategy

In light of the complexity of crossadaptive effects, the simpler adaptive effects could be used as a means of familiarization for performers and "mapping designers" alike: getting to know how the analyzer reacts to different source material, and how to map the signals in a musically effective manner. The adaptive use case is also more easily adaptable to a mixing situation, to composed music, and to any other kind of repeatable situation. The analysis methods can be calibrated and tuned more easily for each specific source instrument. Perhaps we could also look at a possible methodology for familiarization: how do we most efficiently get to know these feature-to-modulator mappings? Revisiting the literature on adaptive audio effects (Verfaille etc.) in the light of our current status and reflections might be a good idea.

Performers utilizing adaptive control

Similarly, it might be a good idea to get in touch with environments and performers using adaptive techniques as part of their setup. Marije reminded me that Jos Zwaanenburg and his students at the Conservatorium of Amsterdam might have more examples of musicians using adaptive control techniques. I met Jos some years ago, and have now contacted him again via email. Hans Leeuw is another Dutch performer working with adaptive control techniques. His 2009 NIME article mentions no adaptive control, but has a beautiful statement on the design of mappings: "…when the connection between controller and sound is too obvious the experience of 'hearing what you see' easily becomes 'cheesy' and 'shallow'. One of the beauties of acoustic music is hearing and seeing the mastery of a skilled instrumentalist in controlling an instrument that has inherent chaotic behaviour". In the 2012 NIME article he mentions audio analysis for control. I contacted Hans to get more details and updated information about what he is using. Via email he tells me that he uses the noise/sinusoidal balance as a control both for signal routing (trumpet sound routed to different filters) and also to reconfigure the mapping of his controllers (as appropriate for the different filter configurations). He mentions that the analyzed transition from noise to sinusoidal can be sharp, and that additional filtering is needed to get a smooth transition. A particularly interesting area occurs when the routing and mapping are in this intermediate zone, where both modes of processing and mapping are partly in effect.
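A small sketch of that kind of smoothed, intermediate routing (my own illustration with made-up parameter names, not Hans Leeuw's actual setup): the raw noise/sinusoidal measure is low-pass filtered, and the smoothed value sets crossfade gains for two parallel processing paths, so that both are partly active in the transition zone.

```python
class SmoothedRouter:
    """Route a signal between two processing paths based on a noisiness
    measure (0 = sinusoidal, 1 = noisy), smoothed with a one-pole filter."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.state = 0.0

    def update(self, noisiness):
        # one-pole lowpass tames the sharp noise/sinusoidal transitions
        self.state = self.smoothing * self.state + (1.0 - self.smoothing) * noisiness
        gain_noisy_path = self.state           # e.g. feeds one filter configuration
        gain_tonal_path = 1.0 - self.state     # e.g. feeds the other
        return gain_noisy_path, gain_tonal_path

router = SmoothedRouter(smoothing=0.9)
for n in [0.0, 0.0, 1.0, 1.0, 1.0, 0.2]:       # an abrupt analysis result...
    print(router.update(n))                    # ...becomes a gradual crossfade
```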

As an example of a researcher/performer who has explored voice control, Marije mentioned Dan Stowell.
Not surprisingly, he has also done his research in the context of QMUL. Browsing his thesis, I note some useful terms for ranking extracted features, as he writes about *perceptual relevance*, *robustness*, and *independence*. His experiments on ranking the different features are not conclusive, as "none of the experiments in themselves will suggest a specific compact feature set". This coincides with our own experience so far: different instruments and different applications require different subsets of features. He does, however, mention spectral centroid as particularly useful. We have initially not used this so much, due to its high degree of temporal fluctuation. Similarly, he mentions spectral spread, where we have so far used spectral flatness and spectral flux instead. This also reminds me of recent discussions on the Csound list regarding different implementations of the analysis of spectral flux (difference from frame to frame, or normalized inverse correlation); it might be a good idea to test the different implementations to see if we can have several variations on this measure, since we have found it useful in some but not all of our application areas. Stowell also mentions log attack time, which we should revisit and see how we can apply or reformulate to fit our use cases. A measure that we haven't considered so far is delta MFCCs, the temporal variation within each cepstral band. Intuitively it seems to me this could be an alternative to spectral flux, even though Stowell found that it does not share significant mutual information with spectral flux. In fact the delta MFCCs have little mutual information with any other features whatsoever, although this could be related to implementation details (decorrelation). He also finds that delta MFCCs have low robustness, but we should try implementing them and see what they give us. Finally, he also mentions *clarity* as a spectral measure, in connection with pitch analysis, defined as "the normalised strength of the second peak of the autocorrelation trace [McLeod and Wyvill, 2005]". It is deemed a quite robust measure, and we could most probably implement it with ease and test it.
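To make the two spectral flux variants mentioned above concrete, here is a small numpy sketch of the two commonly cited definitions (my own formulation, not the exact implementations discussed on the Csound list): one sums the frame-to-frame magnitude difference, the other takes one minus the normalized correlation between consecutive frames.

```python
import numpy as np

def flux_difference(prev_mag, mag):
    """Spectral flux as the mean half-wave-rectified frame-to-frame difference."""
    d = mag - prev_mag
    return float(np.sum(np.maximum(d, 0.0)) / len(mag))

def flux_inverse_correlation(prev_mag, mag, eps=1e-12):
    """Spectral flux as 1 minus the normalized correlation of consecutive frames:
    0 when the spectral shape is unchanged, approaching 1 for unrelated frames."""
    num = float(np.dot(prev_mag, mag))
    den = float(np.linalg.norm(prev_mag) * np.linalg.norm(mag)) + eps
    return 1.0 - num / den

# toy magnitude frames: unchanged vs. strongly changed spectrum
a = np.array([1.0, 0.5, 0.2, 0.1])
b = np.array([0.1, 0.2, 0.5, 1.0])
print(flux_difference(a, a), flux_inverse_correlation(a, a))   # both near 0
print(flux_difference(a, b), flux_inverse_correlation(a, b))   # both clearly above 0
```

Because the two measures scale differently (one depends on absolute level, the other only on spectral shape), running them side by side on the same material would show directly which behaviour suits a given application area.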

 

Seminar 16 December (posted 3 February 2017)

Philosophical and aesthetic perspectives

– Report from meeting 16/12, Trondheim/Skype

Andreas Bergsland, Trond Engum, Tone Åse, Simon Emmerson, Øyvind Brandtsegg, Mats Claesson

The performers' experiences of control:

In the last session (the Trondheim December session), Tone and Carl Haakon (CH) worked with rhythmic regularity and irregularity as parameters in the analysis. They worked with the same kind of analysis, and the same kind of mapping from analysis to effect parameter. After first trying the opposite, they ended up with: regularity = less effect, irregularity = more. They also included a sample/hold/freeze effect in one of the exercises. Øyvind commented on how Tone in the video stated that she thought it would be hard to play with so little control, but that she experienced that they worked intuitively with this parameter, which he found an interesting contradiction. Tone also expressed in the video that on the one hand she would sometimes hope for some specific musical choices from CH ("I hope he understands"), but on the other hand that she "enjoyed the surprises". These observations became a springboard for a conversation about a core issue in the project: the relationship between control and surprise, or between controlling and being controlled. We point here to the degree of specific and conscious intentional control, as opposed to "what just happens" due to technological, systemic, or accidental reasons. The experience from the Trondheim December session was that the musicians preferred what they experienced as an intuitive connection between input and outcome, and that this facilitated the process in the sense that they could "act musically". (This "intuitive connection" is easily related to Simon's comment about "making ecological sense" later in this discussion.) Mats commented that in the first Oslo session the performers stated that they felt a similarity to playing an acoustic instrument. He wondered if this experience had to do with the musicians' involvement in the system setup, while Trond pointed out that the Trondheim December session and the Oslo session were quite similar in this respect. A further discussion about what "control", "alienation" and "intuitive playing" can mean in these situations seems appropriate.

Aesthetic and individual variables

This led to a further discussion about how we should be aware that the need for generalising and categorising – which is necessary at some point to actually be able to discuss matters – can lead us to overlook important variable parameters such as:

  • Each performer’s background, skills, working methods, aesthetics and preferences
  • That styles and genres relate differently to this interplay

A good example of this is Maja's statement in the Brak/Rug session that she preferred the surprising, disturbing effects, which gave her new energy and ideas. Tone noted that this is very easy to understand when you have heard Maja's music, and even easier if you know her as an artist and person. It can be seen as a contrast to Tone/CH, who seek a more "natural" connection between action and sounding result; in principle they want the technology to enhance what they are already doing. But, as cited above, Tone commented that this is not the whole truth: surprises are also welcome in the Tone/Carl Haakon collaboration.

Because of these variables, Simon underlined the need to pin down in each session what actually happens, and not necessarily to set up dialectical pairs. Øyvind, on the other hand, pointed out the need to lay out possible extremes and oppositions to create some dimensions (and terms) along which language can be used to reflect on the matters at hand.

Analysing or experiencing?

Another individual variable, both as audience and performer, is the need to analyse, to understand what is happening when perceiving a performance. One example brought up in this respect was Andreas' experience of how his audience perspective changed after he studied ear training. This new knowledge led him to take an analysing perspective, wanting to know what happened in a composition when performed. He also said: "as an audience you want to know things, you analyse". Simon referred to "the inner game of tennis" as another example: how it is possible to stop appreciating playing tennis because you become too occupied with analysing the game – thinking of the previous shot rather than clearing the mind ready for the next. Tone pointed at the individual differences between performers, even within the same genre (like harmonic jazz improvisation) – some are very analytic, also in the moment of performing, while others are not. This also goes for the various groups of audiences: some are analytic, some are not – and there is most likely a continuum between the analytic and the intuitive musician/audience. Øyvind mentioned experiences from presenting the crossadaptive project to several audiences over the last few months. One of the issues he would usually present is that it can be hard for the audience to follow the crossadaptive transformations, since it is an unfamiliar mode of musical (ex)change. However, responses to some of the simpler examples he then played (e.g. amplitude controlling reverb size and delay feedback) suggested that these were not hard to follow. One of the places where this happened was Santa Barbara, where Curtis Roads commented that he found it quite simple and straightforward to follow. Then again, in the following discussion, Roads also conceded that it was simple because the mapping between analysis parameter and modulated effect was known. Most likely it would be much harder to deduce what the connection was by listening alone, since the connection (mapping) can be anything. Crossadaptive processing may be a complicated situation, not easy to analyse either for the audience or the performer. Øyvind pointed towards differences between parameters, as we had also discussed collectively: some are more "natural" than others, like the relation between amplitude and effect, while others are more abstract, like the balance between noise and tone, or regular/irregular rhythms.

Making ecological sense/playing with expectations

Simon pointed out that we have a long history of connections to sound: some connections are musically intuitive because we have used them perhaps for thousands of years; they make 'ecological' sense to us. He referred to Eric Clarke's "Ways of Listening: An Ecological Approach to the Perception of Musical Meaning" (2005) and William Gaver's "What in the World Do We Hear?: An Ecological Approach to Auditory Event Perception" (1993). We come with expectations towards the world, and one way of making art is playing with those expectations. In modernist thinking there is a tendency to think of all musical parameters as equal – or at least equally organised – which may easily undermine their "ecological validity" – although that need not stop the creation of 'good music' in creative hands.

Complexity and connections

So, if the need for conscious analysis and understanding varies between musicians, is the same true for the experienced connection between input and output? And what about the difference between playing and listening as part of the process, versus just listening, either as a musician, a musicologist, or an audience member? For Tone and Carl Haakon it seemed like a shared experience that playing with regularity/non-regularity felt intuitive for both – while this was actually hard for Øyvind to believe, because he knew the current weaknesses in how he had implemented the analysis. Parts of the rhythmic analysis methods implemented are very noisy, meaning they produce results that can sometimes have significant (even huge) errors in relation to a human interpretation of the rhythms being analysed. The fact that the musicians still experienced the analyses as responding intuitively is interesting, and it could be connected to something Mats said later on: "the musicians listen in another way, because they have a direct contact with what is really happening". So, perhaps, while Tone & CH experienced that some things really made musical sense, Øyvind focused on what didn't work – which would be easier for him to hear? So how do we understand this, and how is the analysis connected to the sounding result? Andreas pointed out that there is a difference between hearing and analysing: you can learn how the sound behaves and work with that. It might still be difficult to predict exactly what will happen. Tone's comment here was that you can relate to a certain unpredictability and still have a sort of control over some larger "groups of possible sound results" that you can relate to as a musician. There is not only an urge to "make sense" (= to understand and "know" the connection) but also an urge to make "aesthetic sense".

With regard to the experienced complexity, Øyvind also commented that the analysis of a real musical signal is in many ways a gross simplification, and by trying to make sense of the simplification we might actually experience it as more complex. The natural multidimensionality of the experienced sound is lost, due to the singular focus on one extracted feature: we are reinterpreting the sound as something simpler. An example mentioned was vibrato, which is a complex input and a complex analysis that could in some analyses be reduced to a simple "more or less" dimension. This issue also relates to the project's need to construct new methods of analysis, so that we can try to find analysis dimensions that correspond to perceptual or experiential features.

Andreas commented: "It is quite difficult to really know what is going on without having knowledge of the system and the processes. Even simple mappings can be difficult to grasp only by ear." Trond reminded us after the meeting of a further complexity that was perhaps not so present in our discussion: we do not improvise with only one parameter "out of control" (adaptive processing). In the crossadaptive situation someone else is processing our own instrument, so we do not have full control over this output, and at the same time we do not have any control over what we are processing, the input (cross-adapting), which in both cases could represent an alienation and perhaps a disconnection from the input-result relation. And of course the experience of control is also connected to "understanding" the processing and analysis you are working with.

The process of interplay:

Øyvind referred to Tone's experience of a musical "need" during the Trondheim session, expressed as "I hope he understands…", when she talked about the processes in the interplay. This points to how you realise during the interplay that you have very clear musical expectations of, and wishes towards, the other performer. This is not in principle different from many other musical improvisation situations. Still, because you are dependent on the other's response in a way that defines not only the whole, but also your own part in it, this thought seemed to be more present than usual in this type of interplay.

Tools and setup

Mats commented that many of the effects used are about room size, and that he felt this had some – to him – unwanted aesthetic consequences. Øyvind responded that he wanted to start with effects that are easy to control and where the control is easy to hear; delay feedback and reverb size are such effects. Mats also suggested that it is an important aesthetic choice not to have effects all the time, and thereby keep the possibility of hearing the instrument itself. So to what extent should you be able to choose? We discussed the practical possibilities here: some of the musicians (for example Bjørnar Habbestad) have suggested a foot pedal, with which the musician could control the degree to which their actions inflict changes on the other musician's sound (or, the other way around, control the degree to which other forces can affect their own sound). Trond suggested one could also have control over the signal/output level of the effects, adjusting the balance between processed and unprocessed sound. As Øyvind commented, these types of control could be a pedagogical tool for rehearsing with the effect, turning the processing on and off to understand the mapping better. The tools of course partly define the musician's balance between control, predictability and alienation. Connected to this, we had a short discussion about amplified sound in general: the instrumental sound coming from a speaker located elsewhere in the room could in itself already represent an alienation. Simon referred to the Lawrence Casserley/Evan Parker principle of "each performer's own processor", and the situation before the age of the big PA, when the electronic sound could be localised to each musician's individual output. We discussed possibilities and difficulties with this in a crossadaptive setting: which signal should come out of your speaker? The processed sound of the other, or the result of the other processing you? Or both? And then what would the function be – the placement of the sound is already disturbed.

Rhythm

New in this session was the use of the rhythmical analysis. This is very different from all other parameters we have implemented so far. Other analyses relate to the immediate sonic character, but rhythmic analysis tries to extract temporal features, patterns and behaviours. Since much of the music played in this project is not based on a steady pulse, and even less confined to a regular grid (meter), the traditional methods of rhythmic analysis are not appropriate. Traditionally one will find the basic pulse, then deduce some form of meter based on the activity, and after this is done one can relate further activity to this pulse and meter. In our rhythmical analysis methods we have tried to avoid the need to first determine pulse and meter, and have rather looked into the immediate time relationships between neighbouring events. This gives much less support for any hypothesis the analyser might form about the rhythmical activity, but it also allows much greater freedom of variation (stylistically, musically) in the input. Øyvind is really not satisfied with the current status of the rhythmic analysis (even if he is the one mainly responsible for its design), but he was eager to hear how it worked when used by Tone and Carl Haakon. It seems that live use by real musicians allowed the weaknesses of the analyses to be somewhat covered up. The musicians reported that they felt the system responded quite well (and predictably) to their playing. This indicates that, even if refinements are much needed, the current approach is probably a useful one. One thing we can say for sure is that some sort of rhythmical analysis is an interesting area for further exploration, and that it can encode some perceptual and experiential features of the musical signal in ways that make sense to the performers. And if it makes sense to the performers, we might guess that it can make sense to the listener as well.
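As a simplified illustration of the "immediate time relationships between neighbouring events" idea (my own sketch, not the analyzer's actual algorithm): compare each inter-onset interval to the previous one, and let a running average of their similarity stand in for a regularity value that could then be mapped, as in the session, to "regular = less effect, irregular = more".

```python
def regularity(onset_times, smoothing=0.8):
    """Crude regularity measure from neighbouring inter-onset intervals (IOIs).
    Assumes strictly increasing onset times. Returns a value near 1.0 for
    steady IOIs, drifting lower the more irregular the intervals become."""
    iois = [t1 - t0 for t0, t1 in zip(onset_times, onset_times[1:])]
    value = 1.0
    for prev, cur in zip(iois, iois[1:]):
        ratio = min(prev, cur) / max(prev, cur)   # 1.0 when equal, toward 0 when very different
        value = smoothing * value + (1.0 - smoothing) * ratio
    return value

print(regularity([0.0, 0.5, 1.0, 1.5, 2.0]))        # steady pulse -> 1.0
print(regularity([0.0, 0.2, 0.9, 1.0, 1.8, 1.85]))  # irregular -> clearly lower
```

Note that this deliberately avoids any notion of pulse or meter, in line with the approach described above, and it will happily report high regularity for a slowly drifting tempo as long as neighbouring intervals stay similar.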

Andreas: How do you define regularity (e.g. in clave-based musics)? How much “less regular” is that than a steady beat?

Simon: If you ask a difficult question with a range of possible answers this will be difficult to implement within the project.

As a follow-up to the refinement of the rhythmic analysis, Øyvind asked: how would *you* analyze rhythm?

Simon: I wouldn’t analyze rhythm. Take, for example, the timeline in African music: a guiding pulse that is not necessarily performed and may exist only in the performer’s head. (This relates directly to Andreas’s next point – Simon later withdrew the idea that he would not analyse rhythm and acknowledged its usefulness in performance practice.)

Andreas: Rhythm is a very complex phenomenon, which involves multiple interconnected temporal levels, often hierarchically organised. Perceptually, we have many ongoing processes involving present, past and anticipations about future events. It might be difficult to emulate such processes in software analysis. Perhaps pattern recognition algorithms can be good for analysing rhythmical features?

Mats: What is rhythm? In our examples, gesture may be a more useful concept than rhythm.

Øyvind: Rhythm is repeatability, perhaps? Maybe we interpret this in the second after it happens.

Simon: No, I think we interpret it virtually at the same time.

Tone: I think of it as a bodily experience first and foremost. (Thinking about this in retrospect, Tone adds: The impulses, when they are experienced as related to each other, create a movement in the body. I notice that over the years of working with non-metric rhythms in free improvisation there is less need for a periodic set of impulses to start this movement. But when I look at babies and toddlers, I recognise this bodily reaction to irregular impulses. I recognise what Andreas says: the movement is triggered by the expectation of more to come. Think about the difference in your body when you wait for the next impulse to come (anticipation) and when you know that it is over.)

Andreas: When you listen to rhythms, you group events on different levels and relate them to what was already played. This grouping is often referred to as «chunking» in psychology. Thus, it works both on an immediate level (now) and on more overarching levels (bar, subsection, section), because we have to relate what we hear to what came earlier. You can simplify or make it more complex.

]]>
http://crossadaptive.hf.ntnu.no/index.php/2017/02/03/seminar-16-december/feed/ 1 682
Oslo, First Session, October 18, 2016 http://crossadaptive.hf.ntnu.no/index.php/2016/12/12/oslo-first-session-october-18-2016/ http://crossadaptive.hf.ntnu.no/index.php/2016/12/12/oslo-first-session-october-18-2016/#comments Mon, 12 Dec 2016 20:01:18 +0000 http://crossadaptive.hf.ntnu.no/?p=599 Continue reading "Oslo, First Session, October 18, 2016"]]> First Oslo Session. Documentation of process
18.11.2016

Participants
Gyrid Kaldestad, vocal
Bernt Isak Wærstad, guitar
Bjørnar Habbestad, flute

Observer and Video
Mats Claesson

The Session took place in one of the sound studios at the Norwegian Academy of Music, Oslo , Norway

Gyrid Kaldestad (vocal) and Bernt Isak Wærstad (guitar) had one technical/setup meeting beforehand, and numerous emails about technical issues went back and forth before the session.
Bjørnar Habbestad (flute) was invited into the session.

The observer decided to make a video documentation of the session.
I’m glad I did, because I think it gives a good insight into the process. And a process it was!
The whole session lasted almost 8 hours, and it was not until the very last 30 minutes that playing started.

I (Mats Claesson) am not going to comment on the performative, musical side of the session. The only reason for this is that the music making happened at the very end of the session, was very short, and was not recorded, so I could not evaluate it “in depth”. However, just watch the comments from the participants at the end of the video. They are very positive…
I think that from the musicians’ side it was rewarding and highly interesting. I am confident that the next session will generate a musical outcome that is substantial enough to be commented on, from both a performative and a musical perspective.

In the video there is no processed sound from the very last playing, due to the use of headphones, but you can listen to the excerpts posted below the video.

Here is a link to the video

Reflections on the process given from the perspective of the musicians:

We agreed to make a limited setup to have better control over the processing, starting with basic sounds and basic processing tools so that we could more easily control the system in a musical way. We started with a tuning analysis for each instrument (voice, flute, guitar).

Instead of choosing analysis parameters up front, we analysed different playing techniques, e.g. non-tonal sounds (sss, shhh), multiphonics etc., and saw how the analyser responded. We also recorded short samples of the different techniques that each of us usually plays, so that we could run the analysis several times.

These are the analysis results we got:

analysis

Since we’re all musicians experienced with live processing, we made a setup based on effects that we already know well and use in our own live-electronics setups (reverb, filter, compression, ring modulation and distortion).

To set up meaningful mappings, we chose an approach that we called “spectral ducking”, where a certain musical feature on one instrument would reduce the same musical feature on the other – e.g. a sustained tonal sound produced by the vocalist would reduce harmonic musical features of the flute by applying ring modulation. Here is a complete list of the mappings used:

mapping
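The inverse (“ducking”) relation described above can be sketched roughly as follows. This is a hypothetical Python illustration only (the actual setup was patched with the Analyzer, MIDIator and audio effect plugins, not coded like this): a normalized feature measured on one instrument is inverted so that more of that feature means less of the corresponding effect on the other instrument.

# Hypothetical sketch of "spectral ducking": a feature measured on one
# instrument (e.g. how sustained/tonal the vocal is, normalized 0..1) is
# inverted to reduce an effect amount on the other instrument
# (e.g. ring modulation depth on the flute).

def duck(feature_value, floor=0.0, ceiling=1.0):
    """Map a normalized feature (0..1) to an inverted effect amount."""
    clipped = min(max(feature_value, 0.0), 1.0)
    return ceiling - clipped * (ceiling - floor)

# Sustained tonal singing (feature near 1.0) almost removes the flute's ring
# mod; noisy, unpitched vocal sounds (feature near 0.0) leave it fully active.
for v in (0.0, 0.5, 1.0):
    print(v, "->", round(duck(v), 2))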

Excerpt #1 – Vocal and flute

Excerpt #2 – Vocal and flute

Excerpt #3 – Vocal and flute

Excerpt #4 – Vocal and flute

Due to the lack of consistent and precise analysis results from the guitar, in combination with time limitations, it wasn’t possible to set up mappings for the guitar and flute. We did however test out the guitar and flute in the last minutes of the session, where the guitar simply took the role of the vocal in terms of processing and mapping. Knowledge of the vocal analysis and mapping made it possible to perform with the same setup even though the input instrument had changed. Some short excerpts from this performance can be heard below.

Excerpt #5 – Guitar and flute

Excerpt #6 – Guitar and flute

Excerpt #7 – Guitar and flute

 Reflections and comments:

  • We experienced the importance of exploring new tools like this on a known system. Since none of us knew Reaper from before, we spent quite a lot of time learning a new system (both while preparing and during the session).
  • Could the analyser meters be turned the other way around? They are a bit difficult to read sideways.
  • It would be nice to be able to record and export control data from the analyser tool, so that it could be used later for synthesis.
  • Could it be an idea to have more analyzer sources per channel? The Keith McMillen SoftStep mapping software could possibly be something to look at for inspiration.
  • The output is surprisingly musical – maybe this is a result of all the discussion and reflection we did before making the setup and before we played?
  • The outcome is something other than playing with live electronics – it is immediate and you can actually focus on the listening – very liberating from a live electronics point of view!
  • The system is merging the different sounds in a very elegant way.
  • Knowing that you have an influence on your fellow musicians’ output forces you to think in new ways when working with live electronics.
  • Our experience is that this is similar to working acoustically.

 

]]>
http://crossadaptive.hf.ntnu.no/index.php/2016/12/12/oslo-first-session-october-18-2016/feed/ 1 599
Seminar 21. October http://crossadaptive.hf.ntnu.no/index.php/2016/10/31/seminar-21-october/ http://crossadaptive.hf.ntnu.no/index.php/2016/10/31/seminar-21-october/#respond Mon, 31 Oct 2016 04:06:31 +0000 http://crossadaptive.hf.ntnu.no/?p=563 Continue reading "Seminar 21. October"]]> We were a combination of physically present and online contributors to the seminar.  Joshua Reiss and Victor Lazzarini participated via online connection, present together in Trondheim were: Maja S.K. Ratkje, Siv Øyunn Kjenstad, Andreas Bergsand, Trond Engum, Sigurd Saue and Øyvind Brandtsegg

Instrumental extension, monitoring, bleed

We started the seminar by hearing from the musicians how it felt to perform during Wednesday’s session. Generally, Siv and Maja expressed that the analysis and modulation felt like an extension of their original instrument. There were issues raised about the listening conditions, and how it can be difficult to balance the treatments with the acoustic sound. One additional issue in this respect when working crossadaptively (as compared to e.g. the live processing setting) is that there is no musician controlling the processing, so the processed sound is perhaps a little bit more “wild”. In a live processing setting, the musician controlling the processing will attempt to ensure a musically shaped phrasing and control that is, at the current stage, not present in the crossadaptive situation. Maja also reported acoustic bleed from the headphones into the feature extraction for her sound. With this kind of sensitivity to crosstalk, the need for source separation (as discussed earlier) is again brought to our attention. Adjusting the signal gating (noise floor threshold for the analyzer) is not sufficient in many situations, and raising the threshold also lowers the analyzer’s sensitivity to soft playing. Some analysis methods are more vulnerable than others, but it is safe to say that none of them are really robust against noise or bleed from external signals.

Interaction scenarios as “assignments” for the performers

We discussed the different types of mapping (of features to modulators), which the musicians also called “assignments”, as each mapping was experienced as a given task: to perform utilizing certain expressive dimensions in specific ways to control the timbre of the other instrument. This is of course true. Maja expressed that she was most intrigued by the mappings that felt “illogical”, and that illogical was good. By illogical, she means mappings that do not feel natural or follow intuitive musical energy flows; things that break up the regular musical attention, and break up the regular flow from idea to expression. One example mentioned was the use of pitch to control reverberation size. For Maja (for many, perhaps for most musicians), pitch is such a prominent parameter in the shaping of a musical statement that it is hard to play when some external process interferes with the intentional use of pitch. Linking pitch to some timbral control is such an interference, because it creates a potential conflict between the musically intended pitch trajectory and the musically intended timbral control trajectory. An interaction scenario (or in effect, a mapping from features to modulators to effects) can in some respects be viewed as a composition, in that it sets a direction for the performance. In many ways it is similar to the text scores of the experimental music of the 60s, where the actual actions or events unfolding are perhaps not described, but rather an idea of how the musicians may approach the piece. For some, this may be termed a composition; others might use the term score. In any case it dictates or suggests some aspects of what the performers might do, and as such imposes something external on the performance.

Analysis, feature extraction

Some of our analysis methods are still a bit flaky, i.e. we see spurious outliers in their output that are not necessarily caused by perceptible changes in the signal being analyzed. One example of this is the rhythm consonance feature, where we try to extract a measure of rhythmic complexity by measuring neighbouring delta times between events and looking for simple ratios between these. The general idea is that the simpler the ratio, the simpler the rhythm. The errors sneak in as part of the tolerance for human deviation in rhythmic performance, where one may clearly identify one intended pattern while the actual measured delta times deviate by more than a musical time division (for example, when playing a jazzy “swing 8ths” triplet pattern, which may be performed somewhere between equal 8th notes, a triplet pattern, or even towards a dotted 8th plus a 16th, and in special cases a double dotted 8th plus a 32nd). When looking for simple integer relationships, small deviations in phrasing may lead to large changes in the algorithm’s output: for example 1:1 for a straight repeating 8th pattern, 2:1 for a triplet pattern and 3:1 for a dotted 8th plus 16th pattern, re the jazz swing 8ths. See also this earlier post for more details if needed. As an extreme example, think of a whole note at a slow tempo (1:1) versus an accent on the last 16th note of a measure (giving a 15:16 ratio). These deviations create single values with a high error. Some common ways of dampening the effect of such errors would be lowpass filtering, exponential moving averages, or median filtering. One problem in the case of rhythmic analysis is that we only get new values for each new event, so the “sampling rate” is variable and also very low. This means that any filtering has the consequence of making the feature extraction respond very slowly (and also with a latency that varies with the number of events played), something we would ideally avoid.
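To illustrate why the output can jump, here is a small hypothetical sketch (not the Analyzer’s implementation) that classifies a pair of neighbouring delta times by snapping their ratio to the nearest simple integer ratio; a timing deviation of a few tens of milliseconds is enough to land on a different candidate:

# Sketch: snap the ratio between two neighbouring delta times (seconds) to the
# nearest simple integer ratio. Small performance deviations can flip the
# result between e.g. 1:1 (straight 8ths), 3:2, 2:1 (triplet swing) and
# 3:1 (dotted 8th + 16th).

CANDIDATES = [(1, 1), (3, 2), (2, 1), (3, 1), (4, 1)]

def nearest_simple_ratio(dt_long, dt_short):
    target = dt_long / dt_short
    return min(CANDIDATES, key=lambda nd: abs(target - nd[0] / nd[1]))

# The "same" swing 8ths figure, played slightly differently:
print(nearest_simple_ratio(0.34, 0.26))  # -> (3, 2)
print(nearest_simple_ratio(0.40, 0.20))  # -> (2, 1)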

Stabilizing extraction methods

In the light of the above, we discussed possible methods for stabilizing the feature extraction to avoid the spurious large errors. One conceptual difficulty is differentiating between a mistake and an intended expressive deviation. More important is to differentiate between different possible intended phrasings. How do we know what the intended phrasing is without embedding too many assumptions in our analysis algorithm? For rhythm, it seems we could do much better if we first deduced the pulse and the meter, but we need to determine what our (performed) phrasings are to be sure of the pulse and meter, so it turns into a chicken-and-egg problem. Some pulse and meter detection algorithms maintain several potential alternatives, giving each alternative a score for how well it fits the observed data. This is a good approach, assuming we want to find the pulse and meter. Much of the music we will deal with in this project does not have a strong regular pulse, and it most surely does not have a stable meter. Let’s put the nitty gritty details of this aside for a moment, and just assume that we need some kind of stabilizing mechanism: as Josh put it, a restricted form of the algorithm.
Disregarding the actual technical difficulties, let’s say we want some method of learning what the musician might be up to: what is intended, what is a mistake, and what are expressive deviations from the norm. This would be some sort of calibration of the normal, average, or regularly occurring behaviour. We could then track change during the session, relative to the norm established in the training. Now, assuming we could actually build such an algorithm, when should it be calibrated (put into learn mode)? Should we continuously update the norm, or just train once and for all? If training before the performance (as some sort of sound check), we might fail miserably because the musician might do wildly different things in the “real” musical situation compared to when “just testing”. Also, if we continuously update the norm, then our measures are always drifting, so something that was measured as “soft” at the beginning of the performance might be indicated as something else entirely by the end of the performance. Even though we listen to musical form as relative change, we might also as listeners recognize when “the same” happens again later, e.g. activity in the same pitch register, the same kind of rhythmic density, the same kind of staccato, abrupt, complex rhythms, etc. With a continuously updated norm, we might classify “the same” as something different. Regarding the attempt to define something as the same, see also the earlier post on philosophical implications. Still, with all these reservations, it seems necessary to attempt creating methods for measuring relative change. This can perhaps be used as a restricted form, as Joshua suggests, or in any case as an interesting variation on the extracted features we already have. It would extend the feature output formats of absolute value, dynamic range and crest. In some ways it is related to dynamic range (e.g. the relative change would to some degree be high when the dynamic range is high), but the relative change would have a more slowly moving reference, and it would also be able to go negative. As a reference for the relative change, we could use a long-term average, a model of expectation, an assumption about the current estimate (maintaining several candidates, as with pulse and meter induction), or a normal distribution and standard deviation. Such long-term estimates have been used with success in A-DAFx (adaptive audio effects) for live applications.
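As a minimal sketch of what such a relative-change feature could look like, assuming a slow exponential moving average as the long-term reference (one of the options listed above; the coefficient value is only an example):

# Sketch of a "relative change" feature: the output goes positive when the
# input rises above its slowly drifting norm, and negative when it falls
# below. The norm here is a slow exponential moving average.

class RelativeChange:
    def __init__(self, norm_coeff=0.01):
        self.norm = None
        self.norm_coeff = norm_coeff  # small coefficient = long memory

    def __call__(self, value):
        if self.norm is None:
            self.norm = value
        self.norm += self.norm_coeff * (value - self.norm)
        return value - self.norm

rel = RelativeChange()
for v in [0.2, 0.2, 0.2, 0.8, 0.8, 0.1]:
    print(round(rel(v), 3))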

Josh mentioned the possibility of doing an A/B comparison of the extraction methods with some tests at QMUL. We’ll discuss this further.

Display of extracted features

When learning (an instrument), multimodal feedback can significantly reduce the time needed to gain proficiency. When learning how the different feature extraction methods work, and how they react to intentional expressive changes in the signal, visual feedback might be useful. Another means of feedback could be sonifying the analysis (which is what we do already, but perhaps it could be made more pointed and simple). This could be especially useful when working with the gate mix method, as the gate will give no indication to the performer that it is about to open, whereas a sonification of the signal could aid the performer in learning the range of the feature and getting an intuitive feel for when the gate will open. Yet another means of feedback is giving the performer the ability to adjust the scaling of the feature-to-modulator mapping. In this way, it would act somewhat like giving the performer the ability to tune the instrument, ensuring that it reacts dynamically to the ranges of expression utilized by that performer. Though not strictly a feedback technique, we could still treat it as a kind of familiarization aid in that it acts as a two-way process between performer and instrument. The visualization and the performer-adjustable controls could be implemented as a small GUI component running on portable devices like a cellphone or touchpad. The signals can be communicated from the Analyzer and MIDIator via OSC, and the selection of features to display can be controlled by the assignments of features in the MIDIator. A minimal set of controls can be exposed, with the effects of these being mirrored in the plugins (MIDIator). Some musicians may prefer not to have such visual feedback. Siv voiced concern that she would not be so interested in focusing visually on the signals. This is a very valid concern for performance. Let us assume it will not be used during actual performance, but as a tool for familiarization during the early stages of practice with the instrument.

]]>
http://crossadaptive.hf.ntnu.no/index.php/2016/10/31/seminar-21-october/feed/ 0 563
Brief system overview and evaluation http://crossadaptive.hf.ntnu.no/index.php/2016/10/07/brief-system-overview-and-evaluation/ http://crossadaptive.hf.ntnu.no/index.php/2016/10/07/brief-system-overview-and-evaluation/#comments Fri, 07 Oct 2016 05:29:17 +0000 http://crossadaptive.hf.ntnu.no/?p=505 Continue reading "Brief system overview and evaluation"]]> As preparation for upcoming discussions about tecnical needs in the project, it seems appropriate to briefly describe the current status of the software developed so far.

analyzer_2016_10
The Analyzer

The plugins

The two main plugins developed are the Analyzer and the MIDIator. The Analyzer extracts perceptual features from a live audio signal and transmits signals representing these features over a network protocol (OSC) to the MIDIator. The job of the MIDIator is to combine different analyzed features (scaling, shaping, mixing, gating) into a controller signal that we can ultimately use to control some effect parameter. The MIDIator can run on a different track in the same DAW, in another DAW, or on another computer entirely.
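To illustrate the first hop of this signal chain, here is a minimal Python sketch of the kind of OSC transmission involved, assuming the python-osc package; the port number and address names here are made up for the example and are not the plugins’ actual scheme:

# Sketch of feature transmission over OSC from an analyzer process to a
# MIDIator process. Assumes the python-osc package; addresses and port are
# hypothetical examples.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9901)  # host/port the MIDIator listens on (assumed)

def send_features(rms, transient_density, pitch):
    client.send_message("/analyzer/rms", rms)
    client.send_message("/analyzer/trans_dens", transient_density)
    client.send_message("/analyzer/pitch", pitch)

send_features(0.12, 3.5, 220.0)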

Strong points

The feature extraction generally works reasonably well for the signals it has been tested on. Since only a limited set of signals is readily available during implementation, some overfitting to these signals can be expected. Still, a large set of features is extracted, and these have been selected and tweaked for use as intentional musical controllers. This can sometimes differ from the purer mathematical and analytical descriptions of a signal. The quality of our feature extraction can best be measured in how well a musician can utilize it to intentionally control the output. No quantitative measurement of that sort has been done so far. The MIDIator contains a selection of methods to shape and filter the signals, and to combine them in different ways. Until recently, the only way to combine signals (features) was by adding them together. As of the past two weeks, mix methods for absolute difference, gating, and sample/hold have been added (a small sketch of these combination methods is shown below).

midiator_modules_2016_10
MIDIator modules
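The combination methods mentioned above can be sketched in Python roughly as follows. This is a simplified illustration, not the MIDIator’s actual implementation; the real modules also scale, shape and filter each input before combining them.

# Simplified sketches of the combination ("mix") methods: add, absolute
# difference, gate, and sample/hold. Inputs a and b are normalized 0..1.

def mix_add(a, b):
    return min(a + b, 1.0)

def mix_abs_diff(a, b):
    return abs(a - b)

def mix_gate(a, b, threshold=0.5):
    # let a through only while b exceeds the threshold
    return a if b > threshold else 0.0

class SampleHold:
    # sample a whenever b rises above the threshold, otherwise hold the last value
    def __init__(self, threshold=0.5):
        self.held = 0.0
        self.prev_b = 0.0
        self.threshold = threshold

    def __call__(self, a, b):
        if b > self.threshold >= self.prev_b:
            self.held = a
        self.prev_b = b
        return self.held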

Weak points

The signal transmission from the Analyzer to the MIDIator, and again from the MIDIator to the control signal destination, each incurs at least one sample block of latency. The size of a sample block can vary from system to system, but regardless of the size used, our system will have three times this latency before an effect parameter value changes in response to a change in the audio input. For many types of parameter changes this is not critical; still, it is a notable limitation of the system.
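To get a feel for the magnitudes involved: the total control latency is 3 × blocksize / samplerate. A quick calculation for some example block sizes at 44.1 kHz (the block sizes are illustrative, not measurements from the project setup):

# Three-block control latency for some example block sizes at 44.1 kHz.
samplerate = 44100
for blocksize in (64, 256, 1024):
    latency_ms = 3 * blocksize / samplerate * 1000
    print(f"{blocksize:5d} samples per block -> {latency_ms:5.1f} ms")
# prints roughly 4.4 ms, 17.4 ms and 69.7 ms respectively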

The signal transmission latency points to another general problem: interfacing between technologies. Each time we transfer signals from one paradigm to another we have the potential for degraded performance, less stability and/or added latency. In our system, the interface from the DAW to our plugins incurs a sample block of latency, and the interface between Csound and Python can sometimes incur performance penalties if large chunks of data need to be transmitted from one to the other. Likewise, the communication between the Analyzer and the MIDIator is such an interface.

Some (many) of the feature extraction methods create somewhat noisy signals. By noise, we mean here that the analyzer output can intermittently deviate from the value we perceptually assume to be “correct”. We can also look at this deviation statistically, by feeding the analyzer relatively (perceptually) consistent signals and looking at how stable the output of each feature extraction method is. Many of the features show activity generally in the right register, and a statistical average of the output corresponds with general perceptual features. While the average values are good, we oftentimes see spurious values with relatively high deviation from the general trend. From this, we can assume that the feature extraction model generally works, but intermittently fails. Sometimes filtering is used as an inherent part of the analysis method, and in all cases the MIDIator has a moving exponential average filter with separate rise and fall times. Filtering can be used to cover up the problem, but better analysis methods would give us a more precise and faster response from the system.
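A minimal sketch of that kind of asymmetric exponential smoothing follows; the coefficient values are illustrative only, not the MIDIator’s actual parameters:

# Exponential moving average with separate rise and fall coefficients,
# the kind of smoothing applied to noisy feature signals.

class AsymmetricSmoother:
    def __init__(self, rise_coeff=0.5, fall_coeff=0.05):
        self.state = 0.0
        self.rise_coeff = rise_coeff  # fast response to increases
        self.fall_coeff = fall_coeff  # slow decay on decreases

    def __call__(self, value):
        coeff = self.rise_coeff if value > self.state else self.fall_coeff
        self.state += coeff * (value - self.state)
        return self.state

smooth = AsymmetricSmoother()
for v in [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]:
    print(round(smooth(v), 3))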

Audio separation between instruments can sometimes be poor. In the studio we can isolate each musician, but if we want them to be able to play together naturally in the same room, significant bleed from one instrument to the other will occur. For live performance this situation is obviously even worse. The bleed gives rise to two kinds of problems: signal analysis is disturbed, and signal processing is cluttered. For the analysis, it would not matter if we had perfect analysis methods when the signal to be analyzed is a messy combination of opposing perceptual dimensions. For the effect processing, controlling an effect parameter for one instrument leads to a change in the processing of the other instrument, simply because the other instrument’s sound bleeds into the first instrument’s microphones.

Useful parameters (features extracted)

In many of the sessions up until now, the most used features have been amplitude (rms) and transient density. One reason for this is probably that they are conceptually easy to understand; another is that their output is relatively stable and predictable in relation to the perceptual quality of the sound analyzed. Here are some suggestions of other parameters that can be expected to work effectively in the current implementation:

  • envelope crest (env_crest): the peakiness of the amplitude envelope; for sustained sounds this will be low, for percussive onsets with silence between events it will be high
  • envelope dynamic range (env_dyn): goes low for signals operating at a stable dynamic level, high for signals with a high degree of dynamic variation.
  • pitch: well known
  • spectral crest (s_crest): goes low for tonal sounds, medium for pressed tones, high for noisy sounds.
  • spectral flux (s_flux): goes high for noisy sounds, low for tonal sounds
  • mfccdiff: measure of tension or pressedness, described here

There is also another group of extracted features that are potentially useful but still have some stability issues:

  • rhythmic consonance (rhythm_cons) and rhythmic irregularity (rhythm_irreg): described here
  • rhythm autocorr crest (ra_crest) and rhythm autocorr flux (ra_flux): described here

The rest of the extracted features can be considered more experimental; in some cases they might yield effective controllers, especially when combined with other features in reasonable proportions.

]]>
http://crossadaptive.hf.ntnu.no/index.php/2016/10/07/brief-system-overview-and-evaluation/feed/ 1 505