• manuels

    @jameslo said:

    Can you show me a patch that, say, generates all 12 equally tempered tones starting with middle C from 1 phasor?

    You could do something like this ... phasor-sync.pd

    Not sure if that's anywhere close to what's going on inside Max's [rate~], though.
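
    For reference, the target frequencies are just the usual 12-TET ratios from middle C; here's a quick numerical sketch of those numbers (the patch derives everything from one phasor, so this is only the math, not the patch):

        f_c4 = 261.626                      # middle C in Hz (12-TET, A4 = 440 Hz)
        freqs = [f_c4 * 2 ** (n / 12) for n in range(12)]
        ratios = [f / f_c4 for f in freqs]  # speed of each tone relative to the base phasor
        for n, (f, r) in enumerate(zip(freqs, ratios)):
            print(f"{n:2d}  {f:8.2f} Hz  ratio {r:.4f}")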

    posted in technical issues read more
  • manuels

    @lacuna Just because nobody else has replied yet ...

    I don't really understand the code of IEMlib's [filter~], but I'm pretty sure that it is indeed a biquad filter, that is: if the filter (defined by the first argument) is second order. First order filters use a similar but cheaper algorithm. Higher order filters are implemented as a cascade of first and second order sections, depending on whether the filter order is even or odd.

    To get the biquad coefficients right, it's important to notice the differences in implementation. (There has been confusion in the past because Max and Pd use different implementations.) Both Pd's [biquad~] and IEMlib's [filter~] seem to use Direct-Form II, but I'm not sure about that.
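
    For reference, here's the Direct-Form II recursion with the sign convention that [biquad~]'s help file documents (whether [filter~] uses exactly the same form internally is the part I'm unsure about); just a sketch:

        def biquad_df2(x, fb1, fb2, ff1, ff2, ff3):
            """Direct-Form II with Pd's [biquad~] signs:
            w[n] = x[n] + fb1*w[n-1] + fb2*w[n-2]
            y[n] = ff1*w[n] + ff2*w[n-1] + ff3*w[n-2]
            Compared to the textbook form y[n] = b0*x[n] + ... - a1*y[n-1] - a2*y[n-2],
            the feedback coefficients have flipped signs: fb1 = -a1, fb2 = -a2."""
            w1 = w2 = 0.0
            out = []
            for xn in x:
                w0 = xn + fb1 * w1 + fb2 * w2
                out.append(ff1 * w0 + ff2 * w1 + ff3 * w2)
                w2, w1 = w1, w0
            return out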

    posted in technical issues read more
  • manuels

    @ddw_music Maybe [player~] from the ELSE library could be an option?

    posted in technical issues read more
  • manuels

    @beep.beep Instead of using a send symbol for the routing to the two outlets, couldn't you just use the plain old [route] object with a preceding [list prepend]?

    posted in technical issues read more
  • manuels

    @ddw_music said:

    Wouldn't surprise me at all if lop2~'s behavior is identical to RLPF's.

    But ELSE's [lop2~] isn't even a resonant filter, just a first order low-pass filter! I think it's just named "lop2~" because "lop~" is already used in Pd Vanilla, kinda misleading though.

    The resonant low-pass filter in ELSE is called [lowpass~]. Checking its help file, I noticed a somewhat strange preset range for the resonance/Q parameter. (Maybe that's also an issue in SC's RLPF?) Values smaller than 0.5 don't seem all too reasonable to me because the filter is then "overdamped" and the actual cutoff frequency drops far below the specified frequency. In some filters Q = 0.5 is therefore simply defined as zero resonance, which probably adds to the confusion ...

    Edit: It's even worse than that, because there are also two different definitions of the Q factor. (Here, I was referring to the "physical" rather than the bandwidth definition.)

    Another edit: In the ELSE tutorial only the bandwidth definition of Q is given (or maybe I just overlooked something?). But in Robert Bristow-Johnson's Audio EQ Cookbook, which is the source for some of ELSE's filters, the physical / electrical engineering definition is used. So there might really be a bit of confusion. Or is it just me confusing things here?
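
    To illustrate the "physical" definition I mean (the one the cookbook uses, where Q describes damping rather than bandwidth), here's a quick check using the RBJ low-pass coefficients. This is only my own sketch, not necessarily how [lowpass~] is implemented:

        import numpy as np

        def rbj_lowpass_poles(f0, q, sr=44100):
            """Pole locations of the RBJ cookbook low-pass for a given Q."""
            w0 = 2 * np.pi * f0 / sr
            alpha = np.sin(w0) / (2 * q)
            a0, a1, a2 = 1 + alpha, -2 * np.cos(w0), 1 - alpha
            return np.roots([a0, a1, a2])

        print(rbj_lowpass_poles(1000, 2.0))   # complex-conjugate pair: resonant
        print(rbj_lowpass_poles(1000, 0.5))   # double real pole: critically damped, zero resonance
        print(rbj_lowpass_poles(1000, 0.3))   # two distinct real poles: overdamped, cutoff drifts down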

    posted in technical issues read more
  • manuels

    Here's an updated version of the pole-zero diagram I posted two years ago:

    pole-zero-diagram-2.0.pd

    Unfortunately, I didn't know about ELSE back then. As you probably know, it has frequency response and pole-zero plots as well, and there's also a section about the Z-plane in the wonderful live-electronics tutorial. In fact, these are the main reasons why I decided to update my patch.

    So the new version of my pole-zero diagram has two additional features: First, you can now switch between linear and logarithmic scaling of both amplitude and frequency, and second, I added some examples of standard filter types, so that you can also see the effect of changing a filter's parameters on the positions of its poles and zeros.
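
    In case anyone wants to see the math the frequency response plot is based on (the standard formula, not necessarily the exact implementation in the patch): the magnitude at a point on the unit circle is the product of its distances to the zeros divided by the product of its distances to the poles.

        import numpy as np

        def magnitude_response(zeros, poles, gain=1.0, n=512):
            """|H(e^jw)| from pole/zero positions: distances to zeros over distances to poles."""
            w = np.linspace(0, np.pi, n)
            z = np.exp(1j * w)
            num = np.prod([np.abs(z - q) for q in zeros], axis=0) if zeros else 1.0
            den = np.prod([np.abs(z - p) for p in poles], axis=0) if poles else 1.0
            return w, gain * num / den

        # example: one-pole low-pass with the pole at 0.9 and a zero at -1 (Nyquist)
        w, mag = magnitude_response(zeros=[-1.0], poles=[0.9])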

    I hope this may be helpful for some of you. If something's wrong with the patch or the filter examples, please tell me!

    posted in patch~ read more
  • manuels

    @ddw_music Thanks for the link! Should probably read the SC forum more often ... This is indeed an exact explanation for what's going on in my version of the filter: As I said, I just used the Butterworth calculation of the filter coefficients, and therefore linear frequency in the S-plane is mapped into the Z-plane using the bilinear transform and frequency warping. The question remains: what's the disadvantage of this mapping? There must be one, I guess ...
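
    For reference, the warping I'm talking about is just the standard tan() mapping of the bilinear transform (my own sketch, not taken from the patch):

        import numpy as np

        sr = 44100.0
        f_digital = np.array([100.0, 1000.0, 5000.0, 15000.0, 20000.0])
        w_d = 2 * np.pi * f_digital / sr      # digital frequency in radians per sample
        w_a = 2 * sr * np.tan(w_d / 2)        # analog frequency it corresponds to under the BLT
        print(w_a / (2 * np.pi))              # close to f_digital at low f, stretched towards Nyquist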

    The main reason for asking this question was this: I'm going to post a little update to my pole-zero diagram with some examples, all of which I implemented using the BLT, and I'm just not sure whether that's a good way to do it.

    posted in technical issues read more
  • manuels

    Haha, yes, it takes some time to get into that. And after a few years it still feels like you're just scratching the surface ...

    posted in technical issues read more
  • manuels

    Still don't fully understand ... Maybe it's just an approximation for computational reasons?

    If I plot the feedback coefficient of both versions of the filter, it looks like this:

    lop2-coefficient.png

    So for frequencies up to about 1500 Hz the approximation works pretty well, for higher frequencies there's an increasing error, and for frequencies above 2 radians the filter has no action at all (as it says in the help file).

    Or maybe I'm completely missing the point ...

    Edit: Actually the help file for ELSE's [lop2~] only says that the filter has no action "as you reach nyquist", which is of course true for the filter with the exact coefficient as well.

    posted in technical issues read more
  • manuels

    I stumbled on this while trying to finally understand the Butterworth filter in the Pd audio examples (see "H13.butterworth.pd"). Studying the calculation of the filter coefficients, I was wondering how the locations of the real pole and zero differ from those of a 1st order filter. So I compared it with a one-pole, one-zero lowpass filter, ELSE's [lop2~].

    Well, yes, there are differences ... But here's the strange thing: My filter with the coefficients calculated exactly like in the Butterworth filter behaves as I would have expected, whereas ELSE's [lop2~] doesn't work too well in the high frequency range. This is even stated in the help file.
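
    In case it helps to see what I mean by "calculated exactly like in the Butterworth filter", here's the textbook first order low-pass obtained via the bilinear transform (my own derivation, not copied from H13.butterworth.pd or from [lop2~]):

        import numpy as np

        def butterworth_1st_order(fc, sr=44100):
            """1st-order low-pass H(s) = 1 / (1 + s/wc) mapped with the bilinear transform.
            Returns (b0, b1, a1) for y[n] = b0*x[n] + b1*x[n-1] + a1*y[n-1]."""
            k = np.tan(np.pi * fc / sr)
            b0 = b1 = k / (1 + k)      # zero at z = -1, i.e. at Nyquist
            a1 = (1 - k) / (1 + k)     # real pole, inside the unit circle for any fc below Nyquist
            return b0, b1, a1

        print(butterworth_1st_order(1000))
        print(butterworth_1st_order(20000))   # pole goes negative near Nyquist but stays well-behaved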

    Here's how I tested the filter: lop2-test.pd

    Can anyone explain this? Am I missing something?

    posted in technical issues read more
  • manuels

    @jameslo said:

    @manuels So judging from your patch, it's not essential to have all waveshaped noise output samples be either 0 or positive?

    It's certainly not essential, maybe not even preferable, but I'm not sure about that. Wouldn't a purely positive valued signal produce DC if there was some in the signal it's getting convolved with?
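
    To illustrate the DC concern (just a numerical sketch, nothing from the actual patch): an all-positive noise signal has a large nonzero mean, and that mean is exactly the DC component that convolution would pass on.

        import numpy as np

        rng = np.random.default_rng(0)
        noise = rng.uniform(-1, 1, 100_000)

        print(noise.mean())            # ~0: zero-mean noise carries (almost) no DC
        print(np.abs(noise).mean())    # ~0.5: the all-positive version has a strong DC offset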

    posted in technical issues read more
  • manuels

    @jameslo Oh, [expr~] obviously can't handle this kind of pow notation ... So what I suggested (pretty arbitrarily, to be honest) was just a skinny Gaussian curve. Try exp(-1000*$v1*$v1) instead.

    The intuition behind my choice was quite simple: First off, I thought it might be useful to reduce the density of the echoes of the input sample. In fact, convolution does produce "echoes" at every sample, although most of them won't be heard as echoes but as filtering effects. Furthermore, the exponential function has the useful property of being bounded between 0 and 1 for negative input values, and it seemed like a natural choice, since it corresponds to our perception of amplitude.

    Trying to generalize on how to use a transfer/shaping function in this specific case: The input (noise) is uniformly distributed, so the distribution of the output is simply whatever distribution the shaping function maps it to. If I correctly understand the concept of white noise, any shaping function will also produce white noise as its output. The "color" of noise is only affected if the operation introduces some kind of correlation between successive samples, which is what all digital filters do, of course.

    So now you may ask: Then why does waveshaping in its conventional use affect the timbre of a sound? Well, usually successive samples of an input signal aren't uncorrelated, right? A transfer/shaping function will then have some impact on the existing autocorrelation of the sound. So the case of applying such a function to white noise is really quite specific.
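
    A quick numerical check of that claim (my own sketch): a memoryless shaping function applied to white noise leaves successive samples uncorrelated, while even a trivial filter does not.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 1_000_000)             # uniform white noise
        shaped = np.exp(-1000 * x * x)                # memoryless waveshaping, sample by sample
        filtered = np.convolve(x, [0.5, 0.5])[1:]     # a simple filter: averages neighbouring samples

        def lag1_corr(s):
            s = s - s.mean()
            return np.dot(s[:-1], s[1:]) / np.dot(s, s)

        print(lag1_corr(shaped))     # ~0: samples stay uncorrelated, so the result is still white
        print(lag1_corr(filtered))   # ~0.5: neighbouring samples are now correlated, no longer white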

    BTW: Thinking about distributions of sample values also helps to understand the RMS amplitude of differently distributed white noise signals: Uniformly distributed noise has the same power as triangle and sawtooth waves, whereas binary noise has the greatest possible power, equal to that of square waves etc.
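
    A quick numeric check of those RMS values (sketch):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        uniform_noise = rng.uniform(-1, 1, n)
        binary_noise = rng.choice([-1.0, 1.0], n)
        sawtooth = np.linspace(-1, 1, n)

        rms = lambda s: np.sqrt(np.mean(s * s))
        print(rms(uniform_noise), rms(sawtooth))   # both ~0.577, i.e. 1/sqrt(3) of peak
        print(rms(binary_noise))                   # 1.0, the same as a square wave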

    Hopefully, this does help in some way ... Apart from that, I would recommend implementing the transfer function as a lookup table and drawing different curves. I think this might be the best way to get an intuition of how a waveshaping transfer function works.

    Edit: Drawing the histogram instead of the transfer function may be even more intuitive, especially if you already know how to use [array random]. So here's basically an audio rate version of that ... array_random~.pd

    posted in technical issues read more
  • manuels

    @jameslo Thanks for the audio examples! Sounds a bit like dense granular synthesis to me.

    If you want to follow the FFT / lowpass filtering approach: You could simply rebuild the filter so that it operates between successive FFT frames like this: ugly-noise.pd
    The "sample rate" of this lowpass filter is then of course SR/hopsize, which is why the lop frequency can only reach up to about 120 Hz.

    I'm still wondering if you can't get similar results with time domain manipulation of white noise, for example if you choose a rather extreme shaping function like, say, exp(-1000*x^2).

    posted in technical issues read more
  • manuels

    Just a little warning: In subpatches with overlapping windows, [noise~] doesn't output new values for each window but only once per block. (At least I hear artifacts that led me to this conclusion.) It seems like it can be fixed with [bang~] --> [random] --> [seed( --> [noise~]

    posted in technical issues read more
  • manuels

    @jameslo Did you try using different distributions of white noise like Gaussian white noise, binary white noise etc.? There's also a thing called velvet noise, which is frequently used to model impulse responses. It's pretty simple to generate various different distributions of white noise via waveshaping. For Gaussian white noise you could alternatively average the outputs of several independent noise generators (you're probably familiar with the so-called "central limit theorem").
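
    In case the terms are new, here are quick sketches of the two generators I mean (my own approximations, and the parameter values are arbitrary):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 48000

        # Gaussian-ish white noise via the central limit theorem: average several uniform generators
        gauss_like = rng.uniform(-1, 1, (12, n)).mean(axis=0)

        # velvet noise: one +/-1 impulse per grid period, at a random position inside each period
        period = 32                                   # samples per impulse
        frames = n // period
        velvet = np.zeros(frames * period)
        pos = np.arange(frames) * period + rng.integers(0, period, frames)
        velvet[pos] = rng.choice([-1.0, 1.0], frames)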

    posted in technical issues read more
  • manuels

    @tildebrow Don't use my poor attempt at recursive counting in practice; the solution from @seb-harmonik.ar is much better! I'm actually quite impressed by it ... That's probably as elegant and efficient as you can get with this in Pd.

    posted in technical issues read more
  • manuels

    @tildebrow To be honest, I've never heard of recursive counting before, so I can only infer from your examples how it's supposed to work. This is my attempt:

    recursive-count.pd

    The patch doesn't give any output for dimension 1, but I assume you won't need that anyway.

    posted in technical issues read more
  • manuels

    @oid Not sure where the "character" in your filter with extra delay comes from ... It's probably some kind of distortion. To recreate this effect in a more controllable way you could set the resonance frequency to some percentage of the cut-off frequency, like this:

    resonance-fix-extended.pd

    For values smaller than 100 the resonance is somehow distorted. Try feeding the filter with impulses at high resonance to hear the effect more clearly.

    I removed the arctangent, so there isn't any nonlinearity in the feedback loop, just the [clip~ -1 1].

    Edit: Would it be more reasonable to set the cut-off frequency to a percentage of the resonant frequency instead? I'm not sure ...

    posted in technical issues read more
  • manuels

    Here's another way to fix the resonance frequency of the 4-pole filter:

    resonance-fix-rzero.pd

    In my previous attempt I calculated the required delay for the 180 degree phase shift, and it turned out to be around 3 samples for a wide range of cut-off frequencies. So introducing 4 averaging filters (or replacing each one-pole filter by a one-pole one-zero lowpass filter) reduces the optimal delay to around 1 sample, which is the smallest possible delay anyway. Not tested yet, but it might be an alternative.
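
    The reasoning behind the "around 1 sample" figure, as a sketch (assuming "averaging filter" means (x[n] + x[n-1]) / 2): each averager is linear phase with exactly half a sample of group delay, so four of them contribute 2 samples, and 3 - 2 leaves roughly 1 sample for the explicit delay.

        import numpy as np

        w = np.linspace(0.01, np.pi, 512)
        H_avg = 0.5 * (1 + np.exp(-1j * w))           # frequency response of one averaging filter
        phase = np.unwrap(np.angle(H_avg))
        group_delay = -np.gradient(phase, w)
        print(group_delay.min(), group_delay.max())   # ~0.5 samples at every frequency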

    posted in technical issues read more
  • manuels

    @elwinbran said:

    As for your more recent message, it will take me a bit more time to fix the resonance fix patch, but I assume then that I can take that design and scale it up to more poles?

    I don't think that's possible: four poles is the maximum here. But considering what @oid said about higher order VCFs, you could put the feedback path around only part of the cascade. I didn't think about this possibility earlier. So maybe it's better to have only one feedback path in the 2x4 pole filter? I don't have any experience with this kind of digital filter design, and you should keep in mind that it's highly unconventional! Better not to tell anybody that it's supposed to sound like a Moog filter or something. :smirk:

    posted in technical issues read more