-
manuels
@porres said:
@manuels said:
Sorry, I'm not good at explaining ...
well please help me understand how you are doing interpolation with such graphs, and what are "basis functions" or "kernel" supposed to mean in this context... or please give me some references to check. Your patch just gave me a new dimension to look at interpolation and I wanna get it
Maybe this is the missing piece of the puzzle? ... Interpolation is just a special type of convolution
The term "basis functions" (which I probably used incorrectly) doesn't matter, and by kernel I was just referring to the function (whether piecewise polynomial or continuous) the input signal is convolved with.
The difference you correctly spotted between my examples and some of the others is also mentioned in the linked resource, in the section "Smoothed quadratic". One advantage of a (non-interpolating) smoothing function is that it produces no overshoot. But of course, if you need actual interpolation, you have to use different functions.
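To make the "interpolation is convolution" idea concrete, here's a minimal Python/NumPy sketch (my own illustration, not one of the patches above, and all names are made up): zero-stuff the samples, convolve with a triangular kernel, and you get linear interpolation.

```python
import numpy as np

# Sketch: linear interpolation expressed as convolution. Zero-stuff the
# input by factor R, then convolve with a triangular ("hat") kernel of
# half-width R. The result passes exactly through the original samples,
# with linear ramps in between.
def linear_interp_by_convolution(x, R):
    upsampled = np.zeros(len(x) * R)
    upsampled[::R] = x                        # zero-stuffing
    n = np.arange(-R, R + 1)
    kernel = 1.0 - np.abs(n) / R              # triangular kernel, peak = 1
    return np.convolve(upsampled, kernel)[R:R + len(upsampled)]

x = np.array([0.0, 1.0, 0.0, -1.0])
y = linear_interp_by_convolution(x, 4)        # y[::4] == x, ramps between
```

Swapping the triangular kernel for a Gaussian turns the same structure into a non-interpolating smoother, which is exactly the difference discussed above.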
Another related topic is resampling. This thread may also be helpful: Digital Audio Demystified
-
manuels
@porres Sorry, I'm not good at explaining ...
There is nothing special about the shift register in those patches, and you are right: doing linear interpolation with six points is of course a waste, since only two points are used. I guess I did it that way because I wanted to switch between different interpolation/smoothing functions in order to compare the results. Only in the case of the Gaussian kernel might the precalculation actually make sense. Maybe not even then.
-
manuels
@ddw_music Just to throw in another way of doing the same thing …
Shift register with precalculated basis functions (or rather: basis functions read from a precalculated kernel): interpolated-noise.pd
I used this for an interpolated version of the Gendyn stochastic synthesis algorithm, which can be used as a simple noise generator as well … gendyn-interp.pd
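For anyone who can't open the patches, the shift-register idea can be sketched in Python like this (hypothetical names, and with a simple triangular kernel standing in for the Gaussian): the kernel is precalculated into a table once, and each output sample is a weighted sum of the stored samples, with the weights read from the table at the fractional position.

```python
import numpy as np

# Precalculate the kernel into a table (here triangular, i.e. linear
# interpolation; the Gaussian case works the same way, just wider).
TABLE = 512                                    # table entries per sample
SUPPORT = 1                                    # kernel half-width in samples
grid = np.linspace(-SUPPORT, SUPPORT, 2 * SUPPORT * TABLE + 1)
KERNEL = np.maximum(0.0, 1.0 - np.abs(grid))

def kernel_read(t):
    # nearest-neighbour lookup into the precalculated kernel table
    idx = int(round((t + SUPPORT) * TABLE))
    return KERNEL[idx] if 0 <= idx < len(KERNEL) else 0.0

def interp(shift_register, frac):
    # shift_register[i] holds the sample at integer offset i;
    # frac is the fractional read position between offsets 0 and 1
    return sum(s * kernel_read(i - frac) for i, s in enumerate(shift_register))
```

E.g. `interp([2.0, 4.0], 0.25)` gives 2.5, i.e. linear interpolation between the two stored samples.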
-
manuels
@spoidy23 You mean something like a variable kernel density estimation (with adaptive bandwidth)? I guess that would be difficult ...
-
manuels
@spoidy23 Interesting ... I didn't know about KDE, but it seems to be more or less the same thing. The iterative approach I proposed could be seen as a way to find the "right" bandwidth (whatever that is). To illustrate this, I added a plot for the corresponding kernel, which is just the impulse response of the filter: filtered-histogram-kernel.pd
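In text form (a hypothetical Python sketch, not the patch itself): the kernel of a repeated filter is just its impulse response, and repeating even a plain moving average quickly converges toward a Gaussian shape (central limit theorem).

```python
import numpy as np

# Feed a unit impulse through 8 passes of a 3-point moving average;
# the resulting impulse response is the effective smoothing kernel,
# and it already looks very much like a Gaussian.
impulse = np.zeros(41)
impulse[20] = 1.0
box = np.full(3, 1.0 / 3.0)

kernel = impulse
for _ in range(8):
    kernel = np.convolve(kernel, box, mode="same")

# kernel sums to 1, is symmetric around index 20, and is bell-shaped
```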
-
manuels
@jameslo said:
It even looks like what I was proposing, a moving RMS window, might be another instance of a zero-phase lowpass filter
Yes, indeed! Or to be exact, that's only true as long as there are data points available for the whole RMS window. At the beginning and end of your dataset, you still have a phase shift (of half the effective window size). But that's always a problem, and I don't have a solution for that ...
BTW you could also try using a weighted average. I did that indirectly by repeating the process many times (which has the effect of approaching a Gaussian filter).
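A tiny numerical illustration of that last point (my own sketch, nothing from the patches): repeating a 2-point average four times is exactly the same as a single weighted average with binomial weights, which are a discrete approximation of a Gaussian.

```python
import numpy as np

# Repeating the 2-point average [1/2, 1/2] four times ...
two_point = np.array([0.5, 0.5])
repeated = two_point
for _ in range(3):
    repeated = np.convolve(repeated, two_point)

# ... is identical to one weighted average with binomial weights:
binomial = np.array([1, 4, 6, 4, 1]) / 16.0
```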
-
manuels
@jameslo What's wrong with low-pass filtering? Not being a data scientist, I would have thought that this is what they are doing all the time.
But yes, of course, there's a problem with low-pass filtering that you have to be aware of (and that you are maybe referring to): if the output of your filter depends only on the current input and previous input or output values, then you will always have some phase shift, which is certainly not what you want when dealing with a histogram! So you have to use a zero-phase filter.
Here's how I would do it ... filtered-histogram.pd
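The basic trick can be sketched like this in Python (a hedged sketch of the general technique; the patch may do it differently): run a causal one-pole low-pass over the data forward, then run it again backward, so the phase shifts cancel and peaks stay put.

```python
import numpy as np

def one_pole(x, a):
    # causal one-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])
    y = np.empty(len(x))
    acc = x[0]
    for i, v in enumerate(x):
        acc += a * (v - acc)
        y[i] = acc
    return y

def zero_phase_smooth(hist, a=0.3):
    forward = one_pole(np.asarray(hist, dtype=float), a)
    return one_pole(forward[::-1], a)[::-1]    # backward pass, flip back

hist = [0, 0, 0, 10, 0, 0, 0]
smooth = zero_phase_smooth(hist)               # peak stays at index 3
```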
-
manuels
@Element-S I can't help you with the Pultec EQP-1 implementation, but since you're talking about being late to the party, it might be worth pointing out that at least some of [bob~]'s weaknesses mentioned in this thread may have been the result of a bug that was fixed around three years ago. If I remember correctly, the output wasn't taken after the fourth stage of the filter but before the first (= input + feedback). So it's no surprise that this was considered a bad implementation of the Moog ladder filter.
Edit: Had to look it up ... It was actually a bit different from what I remembered: The incorrect output was the first instead of the fourth state variable. So the output was taken AFTER the first stage.
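For illustration, here's roughly where that bug sits in a generic 4-stage ladder structure (a hypothetical Python sketch, not [bob~]'s actual code):

```python
# Generic 4-stage ladder skeleton: four cascaded one-pole stages with
# global feedback. The point of the bug above: the output has to be the
# state of the LAST stage ...
def ladder_tick(x, state, g, k):
    u = x - k * state[3]                  # input minus feedback from stage 4
    for i in range(4):
        inp = u if i == 0 else state[i - 1]
        state[i] += g * (inp - state[i])  # one-pole stage
    return state[3]                       # ... i.e. state[3], not state[0]
```

With k = 0 this is just four cascaded low-passes; the resonance comes from the feedback term.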
-
manuels
@trummerschlunk Here's my implementation of the 2nd (more accurate) version: dynamic-smoothing.pd
Not tested yet, so I'm not sure that it actually works ...
-
manuels
@trummerschlunk I made a little test patch for the different options that you described, hope it's helpful in some way ... crossover-filter-test.pd
The version with shelving filters shouldn't be more CPU-expensive (it might even be cheaper), and the math for the gains is quite simple. Or am I missing something?
But I don't think any of these approaches is suitable for as much as 32 bands! Wouldn't you need much steeper filter slopes for that? Of course, you could use higher order Butterworth shelving filters, but that's gonna be really expensive! So maybe, as @jameslo suggested, you should go for frequency domain techniques in that case.