The convolution techniques we’ve developed in this project are aimed at live performance, and so far that is mostly how they have been used. Since the filters can be continuously updated, the techniques may also open up other creative applications of convolution and filtering. As a very basic demonstration of how the filters sound with prerecorded material, we have made two audio examples. Note that the updating of the filters is still done in real time; it is only the source sounds playing into the filters that are prerecorded.
Source
The source sounds are prepared as a stereo file, with sound A in the left channel and sound B in the right channel. Sound A is assembled from sounds downloaded from freesound.org, all made by user Corsica_S (files “alpha.wav”, “bravo.wav”, “charlie.wav”, “delta.wav”). Sound B is a drum loop programmed by Oeyvind Brandtsegg sometime in the late ’90s.
Source sounds for the convolution demo.
Liveconv
Sound A is recorded as the impulse response for liveconv. The filter length is approximately 0.5 seconds (the duration of the spoken word used as source). The IR is replaced approximately every 5.5 seconds, using manually written automation so that each replacement lines up with the moment the appropriate word appears in the source. This way, the first IR contains the word “alpha”, which is then replaced after 5.5 seconds with the word “bravo”, and so on until all four words have been used as IRs.
Liveconv demo using the above source sounds.
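For those who want to experiment with a similar setup, here is a minimal Csound sketch of the kind of liveconv patch described above. The file name, table size, partition length and update rate are illustrative placeholders rather than the exact settings used for the demo, and the IR replacement is simplified to a regular trigger instead of being lined up with the individual words; check the liveconv manual page for the precise argument list in your Csound version.

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 64
nchnls = 2
0dbfs  = 1

; IR table, 32768 samples (~0.74 s at 44.1 kHz); placeholder size, power of two
giIR ftgen 1, 0, 32768, -2, 0

instr 1
  ; hypothetical stereo source: sound A (left) feeds the IR, sound B (right) is filtered
  aA, aB  diskin2 "source_AB.wav", 1
  ; continuously record sound A into the IR table
  andx    phasor  sr/ftlen(giIR)
          tablew  aA, andx, giIR, 1
  ; trigger an IR reload roughly every 5.5 seconds
  kupd    metro   1/5.5
  ; liveconv: partitioned convolution with a dynamically reloadable IR
  ; (argument order assumed: input, IR table, partition length, update trigger, clear flag)
  aout    liveconv aB, giIR, 2048, kupd, 0
          outs    aout*0.5, aout*0.5
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
```

The point of the structure is that sound A is written into the IR table continuously, while the update trigger decides when the running filter actually picks up the new table contents.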
tvconv
Sound A and sound B run continuously through the filter, and no freezing of coefficients is applied. The filter length is 32768 samples, approximately 0.74 seconds at a sample rate of 44.1 kHz.
tvconv demo using the above source sounds.
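A corresponding Csound sketch for the tvconv example might look like this. Again, the file name and the partition/filter sizes are placeholders, and the two freeze arguments are assumed here to mean “0 = keep updating the coefficients”; consult the tvconv manual page if the behaviour seems inverted.

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 64
nchnls = 2
0dbfs  = 1

instr 1
  ; hypothetical stereo source: sound A (left) and sound B (right) both feed the filter
  aA, aB  diskin2 "source_AB.wav", 1
  ; time-varying convolution of the two signals;
  ; freeze switches at 0 (assumed to mean "keep updating the coefficients"),
  ; partition size 1024 samples, filter size 32768 samples (~0.74 s at 44.1 kHz)
  aout    tvconv  aA, aB, 0, 0, 1024, 32768
          outs    aout*0.2, aout*0.2
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
```

With both freeze switches left open, the filter coefficients are taken continuously from the inputs, which corresponds to the unfrozen behaviour described above.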
I greatly enjoyed the “Live Convolution with Time-Varying Filters” article. I’m impressed with how you’re using the algorithms!
For the dynamic range issue, have you considered taking the square root of each of the spectral magnitudes, or something like that?
Hi Earl,
We did talk about this earlier, but have not gotten around to trying it yet. It is a good idea, and it will probably make things better. Still, the large variations in output amplitude depend heavily on the amount of spectral overlap between the two signals, and we are looking into how to adjust for that automatically as well.