@Polaris please don't use pd-extended in 2020... In the latest pd "vanilla" there is the option to run pd in batch mode from within pd using the fast-forward message. here is an example:
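The attached patch doesn't render here, so a rough textual sketch of the idea (the array name and the 10000 ms figure are my placeholders; "fast-forward" is a message understood by recent Pd vanilla):

```
[osc~ 440]
 |
[tabwrite~ array1]          <- bang this, then immediately send:

[; pd fast-forward 10000(   <- runs DSP 10 seconds ahead, instantly
```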
it fills a 10 sec. array instantly. Of course, all other processes will be fast-forwarded as well.
However, if you explain what you are planning to do in a wider context, I guess there is a better solution than filling arrays instantly. maybe..
@tungee the Haas effect is already implemented in your brain; it's a psychoacoustic effect.
I guess what you want is a small delay between the two output channels. You can do that with [pp.sdel~], a sample-wise delay.
something like this:
assuming you have the abstractions in your path, here is the patch:
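In case the attachment doesn't show, here is the same idea as a plain-code sketch (Python instead of a patch; the 15 ms figure and the test signal are just placeholders): delay one channel by a few samples relative to the other.

```python
# Haas-style inter-channel delay: shift the right channel by a few ms.
import math

SR = 44100                      # sample rate
delay_ms = 15                   # 1-30 ms is the typical Haas range
delay_samps = int(SR * delay_ms / 1000)

# a short mono test signal
mono = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR // 10)]

left = mono
# right channel: same signal, delayed by delay_samps samples
right = [0.0] * delay_samps + mono[:-delay_samps]
```

In the patch, this is simply the right channel passing through the sample-wise delay abstraction.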
if you change the delay times over time you can get some nasty phasing effects depending on the position of your speakers. it works okay with headphones though.
@Laevatein do you want to avoid externals in general?
my spoonful of wisdom if you want to do something like earplug~ with pd-vanilla objects only:
First you'll have to figure out how to do the panning between your channels. Basically you have two options: VBAP (vector based amplitude panning, read: https://ccrma.stanford.edu/workshops/gaffta2010/spatialsound/topics/amplitude_panning/materials/vbap.pdf) or DBAP (distance based amplitude panning, read: http://www.pnek.org/wp-content/uploads/2010/04/icmc2009-dbap.pdf).
Actually it's just triangulation, not too complicated stuff. You'll find many open-source code examples online, including Pd patches: e.g. [pan8~] in the else library and my humble implementation [pp.spat8~] in audiolab are both DBAP. Both are available in the deken repos (help -> find externals).
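If it helps to see the math outside of a patch, here is a minimal DBAP sketch in Python (my own reading of the paper linked above; the speaker layout, rolloff, and blur values are made-up examples):

```python
import math

def dbap_gains(src, speakers, rolloff=1.0, blur=0.1):
    """Distance-based amplitude panning: each speaker's gain is
    inversely proportional to its distance from the source (raised
    to a rolloff exponent), then the gains are normalized so the
    total power stays constant."""
    # 'blur' avoids division by zero when the source sits on a speaker
    amps = [1.0 / (math.hypot(src[0] - sx, src[1] - sy, blur) ** rolloff)
            for sx, sy in speakers]
    norm = math.sqrt(sum(a * a for a in amps))
    return [a / norm for a in amps]
```

For a source centered in a square of four speakers, `dbap_gains((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)])` gives four equal gains whose squares sum to 1.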
The convolution part is a bit more complicated. You can do it in pd-vanilla; there are some examples in Alexandre Torres Porres' live electronics tutorial, which is now part of the else library, I think. But if you care about latency, or if your IR files are longer than a few hundred milliseconds, you'll have to use partitioning. Again, you can find examples online: Tom Erbe shared something a few years ago, and there are patches in else, in audiolab ...

The problem is, though: if you do the convolution part in Pure Data and you need to convolve with, say, 8-16 stereo impulse responses to get a decent binaural effect, it will bring your modern-day computer to its knees. You can try to outsource the convolution to different instances of pd with [pd~], but I think it makes much more sense to use tools that are better suited for this task. jconvolver comes to mind if you're using jack.
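To see where the cpu time goes: unpartitioned time-domain convolution costs len(ir) multiplies per output sample, which is exactly what kills you with long IRs times 8-16 binaural pairs. A naive sketch:

```python
def convolve(dry, ir):
    """Naive time-domain convolution. Every output sample costs
    len(ir) multiply-adds, which is why long impulse responses get
    expensive fast; FFT-based partitioned convolution fixes that."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out
```

Convolving with a unit impulse just reproduces the IR, which makes a handy sanity check.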
good luck with your project!
pushed an update to deken (v 0.3).
changes since the last version:
pp.fft-freeze~ - a spectral freezer based on I08.pvoc.reverb.pd
pp.ladder~ - a "moog" 4th-order ladder filter. I made this one out of curiosity about the "zero delay filter" design. I wouldn't recommend using it, though, since we have [bob~] and the cpu usage is pretty high! (except if you need a fast-modulating resonant hp or bp filter for whatever reason).
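For the curious, the "zero delay" idea for a single one-pole stage looks roughly like this (my own Python sketch of the common topology-preserving form, not the actual [pp.ladder~] internals):

```python
import math

class ZDFOnePole:
    """One-pole lowpass solved with 'zero-delay feedback': the
    implicit equation y = s + g*(x - y) is solved for y directly,
    instead of feeding back last sample's output."""
    def __init__(self, sr=44100.0):
        self.sr = sr
        self.s = 0.0            # integrator state

    def process(self, x, cutoff):
        g = math.tan(math.pi * cutoff / self.sr)  # prewarped gain
        v = (x - self.s) * g / (1.0 + g)
        y = v + self.s
        self.s = y + v          # update state for the next sample
        return y
```

A 4th-order ladder chains four such stages and closes a resonance feedback loop around the whole chain, which needs yet another implicit solve: that is where the cpu cost comes from.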
pp.echo~ - an "analog" tape-delay-ish echo effect.
pp.xycurve - draws an xy bezier curve and reads from it. Useful for all kinds of musical gestures..
Many bugfixes and smaller changes. Be aware that some objects might behave a little differently than in the last version!
@Gabriel-Lecup check out [pd~]!
also, you should optimize your system for audio if you haven't:
Now this is a little off topic; you don't have to follow all the steps from the link above, but I highly recommend installing rtirq-init (https://github.com/rncbc/rtirq) and indicator-cpufreq (http://manpages.ubuntu.com/manpages/focal/man1/indicator-cpufreq.1.html)
@jameslo ohoh, you stumbled across something here...
first, @lacuna is not exactly right: the delay lines give you a one-sample delay as expected, since the blocksize in the subpatch is 1.
so what is wrong with the patch?
here is an example:
spot the difference....... right, there is none!
The only difference between these two patches is that I created the "delwrite~/delread~ phase2Delay" pair after "phase1Delay" in the working example. (And I assume you created "phase1Delay" after "phase2Delay" in the patch you shared.)
Pd sorts the DSP processing of the delay lines in order of creation. You can do the usual execution-order trick to get it right:
..but you should complain about this on the pd mailing list, because this strange behavior of pd doesn't seem to be properly documented anywhere.
by the way, the working patch generates some very nice psychedelic mandalas when you feed it with an impulse generator : )
@JamesWN offline rendering is possible with pd if you use the -batch flag.
here is an example patch:
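Since the patch screenshot may not show up here, a rough sketch of what such a patch contains (the layout and object choices are mine; [writesf~] does the actual file writing):

```
[loadbang] --> [open osc440.wav, start( --> [writesf~] <-- [osc~ 440]

[loadbang] --> [delay 20000] --> [stop( --> [writesf~]
                            \--> [; pd quit(
```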
it will render 20 sec of a 440 Hz sine tone to an "osc440.wav" file. run it with
pd -batch batchexample.pd
in your console.
if you want to study the patch, be quick and delete the connection to the "pd quit;" message, or it will quit pd after 20 sec...
@jameslo there are some hints in the help file. fexpr~ cannot look back one sample if the blocksize/vector size is only one sample; to look back one sample it needs a block size of at least two samples.
By the way, there is no reason to use block~ with fexpr~ (except if you want to look back more than 64 samples).
@Nicolas-Danet Hi Nicolas, I'm very interested in Spaghettis since you mentioned that the DSP is multi-threaded. I'm not a coder, so I cannot really contribute anything, but I'd like to test. However, I need at least [clone] to do so. Are you planning to integrate it?
Anyway, I'm curious where you'll go with this project!
@jameslo the I.04 example operates on the magnitudes as well. The only difference is that the powers are filtered by the noise mask before taking the root.
yes, there is theory. In this case (amplitude measurement) it's plain signal theory, but you can think about it in less technical terms: the RMS thing is just a signal ring-modulated by itself, so you end up with two frequency bands, one at double the original frequency and one at 0 Hz! The lowpass filters out the high band, and what's left is the 0 Hz signal, or dc offset, or amplitude...
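You can check that numerically: square a sine of amplitude A, average away the 2f band, and the dc that remains is A²/2 (a quick Python sketch; the sample rate, frequency, and amplitude are arbitrary):

```python
import math

SR, F, A = 48000, 100, 0.5
N = SR // F * 10                     # exactly ten full periods
sig = [A * math.sin(2 * math.pi * F * n / SR) for n in range(N)]

# "ring modulate the signal with itself" = square it
squared = [x * x for x in sig]

# the crudest possible lowpass: average over whole periods.
# what survives is the 0 Hz band, i.e. A**2 / 2
dc = sum(squared) / len(squared)
rms = math.sqrt(dc)                  # = A / sqrt(2)
```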
Now there is something like the hilbert~ allpass, which magically shifts the phase of your signal by 90 degrees, roughly speaking. That's very handy: if you multiply each of these signals by itself, the phases of the high-frequency bands are 180 degrees apart, so if you add them together they cancel out and you get the amplitude instantly, without the need for a lowpass filter. (The magnitude calculation in the fft examples works similarly.)
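In numbers: with x = A·cos and its 90-degree copy y = A·sin, x² + y² equals A² at every single sample, no lowpass needed (a toy Python check with made-up values; a real hilbert~ only approximates the 90-degree shift over a band of frequencies):

```python
import math

SR, F, A = 48000, 100, 0.5
amps = []
for n in range(1000):
    ph = 2 * math.pi * F * n / SR
    x = A * math.cos(ph)           # original signal
    y = A * math.sin(ph)           # ideal 90-degree shifted copy
    # the doubled-frequency parts of x*x and y*y cancel each other,
    # leaving the amplitude instantaneously:
    amps.append(math.sqrt(x * x + y * y))
```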
I think there are no rules: if you have a basic understanding of what you're doing in DSP, it's all about artistic decisions and compromises. Do you need instantaneous amplitude detection, i.e. fast attacks of your envelope follower in a vocoder, even if it's very cpu-expensive? Maybe it sounds cool. But maybe avg~ is just good enough.
> Am I right to think it's closer to your abs avg example?

> can I assume that your rms example is equivalent to env~?

also yes, except that env~'s output is in dB units.

> do you have any general guidance as to which measurement is most appropriate?

not really. Try them all and see which you like most. The peak method might be too cpu-expensive if you have a lot of filter bands.

Katja's blog is a good read: http://www.katjaas.nl/compander/compander.html

> Earlier on this forum I asked a similar question about the FFT examples I.04 - I.06, which all seem to use different amplitude measurements for reasons I don't understand.

couldn't find that post. Actually they are all the same...
@jameslo the differences are explained in the help files. Maybe what you expected from cyclone/avg~ is "peak to peak" amplitude?
I'd suggest not using either of those objects; use signal objects instead:
the averaging is handled by the lowpass filter: [lop~ 1] averages over roughly 1 second, [lop~ 10] over a tenth of a second, and so on.
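In code terms, the abs + [lop~] combination is just a rectifier followed by a one-pole smoother; a rough Python equivalent (the coefficient formula is one common approximation, not necessarily what [lop~] uses internally):

```python
import math

def envelope(sig, cutoff_hz, sr=44100.0):
    """Rectify, then smooth with a one-pole lowpass: the
    signal-object version of an 'abs avg' amplitude measurement."""
    # one-pole smoothing coefficient for the given cutoff
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for x in sig:
        y += a * (abs(x) - y)      # [lop~] acting on |x|
        out.append(y)
    return out
```

A lower cutoff means a longer averaging time and a smoother, slower envelope.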