Using a low vector size, scheduler in overdrive, and audio interrupt uses a large amount of CPU, but it seems to be getting the results/accuracy I am after. Is there an equivalent high CPU high precision mode in PD, and how do I enable it?
Max's vector size/scheduler overdrive/audio interrupt equivalent?
- Scheduler in Overdrive-- Pd has no equivalent because there aren't any "low priority" events in Pd. For example, if you want to load a wav into an array, Pd doesn't trigger the next part of the patch until you have finished filling the array.
That's certainly a drawback that's been mentioned several times in the history of Pd, but it would be a poor solution to just categorize various outlets as either low or high priority and do threading based on the state of a configuration button. If you put the low-priority computation in a different thread by clicking an "overdrive" button, you would subtly break Pd's determinism. Another way to put that: all the trouble you've gone to with [trigger] to make sure things fire in the right order is no longer _guaranteed_ to work. That's bad, especially for a programming language that uses diagrams specifically to show the order in which data flows through the program.
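To make the determinism point concrete, here's a rough Python sketch (not real Pd code) of how a [trigger]-style object guarantees firing order, rightmost outlet first:

```python
# Conceptual sketch of [trigger]: outlets fire in a fixed
# right-to-left order, so downstream execution order is guaranteed.
log = []

def trigger(*outlets):
    # fire the rightmost outlet first, like [t b b] in Pd
    for fn in reversed(outlets):
        fn()

trigger(lambda: log.append("left outlet"),
        lambda: log.append("right outlet"))
print(log)  # → ['right outlet', 'left outlet']
```

If the two branches instead ran on separate threads (the hypothetical "overdrive" button), nothing would guarantee that ordering anymore.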
If you read the Max/MSP documentation carefully, you'll see that their description of what "Overdrive" does is essentially a restatement of, "Warning: this may break determinism in your patch."
The right solution would be to have objects like [soundfiler] and other heavy computation objects have an outlet fire a bang when they're done, like [del] and [pipe]. These objects don't fire in zero-logical time, and you learn how to use them accordingly and can therefore ignore whether or not there's a threaded implementation underneath. (At least that's my quick assessment of the situation.)
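Here's a hedged Python sketch of that "done outlet" idea: the heavy work runs off the main thread, and the rest of the patch explicitly waits for a completion bang instead of assuming the load happens in zero logical time. (The function and file names are made up for illustration; this is not how [soundfiler] is actually implemented.)

```python
import threading

def load_array(path, array, on_done):
    """Hypothetical [soundfiler]-like loader: heavy work runs in a
    worker thread, and a 'done' callback fires when it finishes,
    like an outlet sending a bang."""
    def worker():
        # stand-in for reading a soundfile into the array
        array.extend([0.0, 0.1, 0.2, 0.3])
        on_done()  # the "done" outlet fires its bang
    threading.Thread(target=worker).start()

samples = []
done = threading.Event()
load_array("kick.wav", samples, done.set)  # "kick.wav" is a made-up name
done.wait()          # downstream part of the patch waits for the bang
print(len(samples))  # → 4
```

The patch author never needs to know whether there's a thread underneath; the dataflow contract is simply "do the next thing when the bang arrives," exactly as with [del] or [pipe].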
I/O Vector Size-- maybe "Delay" in the Pd audio settings? Not exactly sure about that one.
The Scheduler in Audio Interrupt-- again, not totally sure, but I think Pd's message dispatcher sends messages every block. By default the block size is 64 samples-- you can increase or decrease it in a subpatch using [block~], but some objects like [bang~] seem to be hardcoded to 64. Anyhow, I think 64 is the smallest you can get in Max/MSP so Pd's default should be sufficient.
thanks for the info.
smallest in Max/MSP is 1. I like to use 8 because I am doing timing-critical processing; it needs to be nearly sample-accurate.
Also, why is it a drawback that PD doesn't trigger the next part of the patch until the array has been filled? Seems like a good thing.
Well, imagine that the array is like a turntable, and loading it with samples from a soundfile is like putting a record on the turntable. In that case the patch author may not care about the array loading in a deterministic manner. He or she just does other stuff until it is loaded, then triggers a read once it is ready. That patch author would surely prefer a longer load time to audio dropouts. But Pd doesn't let you make that choice-- once it starts loading the soundfile it must finish before doing anything else. And if that means there's not enough time left to deliver audio to the backend before the next deadline, so be it.
In that case an "Overdrive" button that triggers [soundfiler] in a low priority thread might be helpful. However, that's a trivial example. As the patch gets more complex, determinism of course becomes more important for sanity and maintainability.
But that's mostly a theoretical example anyway, because reading a soundfile into an array will cause Pd to rebuild the DSP graph. For a virtual dj console patch of any complexity that would most likely cause a dropout.
So then... what about things happening simultaneously... and possibly sample-accurate? Is that never a guarantee on PD or Max?.. I guess Max has Gen so that might solve the problem.
Still not sure about Max/MSP's vector size... but if you need a chain of objects in your patch to do computations with a blocksize of 1, just use [block~ 1] inside a subpatch. The xlets feeding in and out will convert to/from the main patch's block size of 64.
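As a rough illustration of that reblocking boundary (a Python sketch, not actual Pd internals): the parent hands the subpatch a 64-sample block, the inner chain runs one sample at a time, and the results are reassembled into a 64-sample block on the way out.

```python
OUTER = 64  # parent patch block size

def inner_chain(sample):
    # stands in for the signal objects inside a [block~ 1] subpatch,
    # which each see a block size of 1
    return sample * 0.5

def subpatch_boundary(block):
    # the inlet~/outlet~ boundary: split the parent's 64-sample block
    # into single samples, process each, and reassemble
    return [inner_chain(s) for s in block]

out = subpatch_boundary([1.0] * OUTER)
print(len(out), out[0])  # → 64 0.5
```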
SOO..... if you can just set [block~ 1], then why do I see a lot of complaints about things not being sample-accurate in PD? Is that simply because some objects can go down to a block size of 1, and others can't?
There may be several reasons for those complaints:
* the frustrated patch author may be passing data between the control and signal domain, which has a more complicated dataflow than a patch where the data stays in the signal domain
* it takes more CPU to compute 64 blocks in a subpatch with a [block~ 1] than it does to compute a single block of 64 samples. For complex patches, throwing everything in a [block~ 1] subpatch could very quickly eat up all of your resources. (After all, if the two approaches were equally efficient you could just do everything in the control domain.)
Actually I'm not sure about that part in the parentheses. I believe the signal graph is computed by iterating through an array of function pointers, while the control domain walks through a bunch of linked lists of outlets. I'd expect the former to have a lot less overhead.
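You can see the call-count overhead with a toy sketch (Python, purely illustrative): one perform routine handling a whole 64-sample vector is a single call per tick, while a block size of 1 means 64 calls for the same audio.

```python
calls = 0

def dsp_perform(block):
    """Toy perform routine: one entry in the signal graph's array of
    function pointers, processing whatever vector it's handed."""
    global calls
    calls += 1
    return [s * 2.0 for s in block]

samples = [float(i) for i in range(64)]

# block size 64: one perform call covers the whole vector
calls = 0
dsp_perform(samples)
calls_at_64 = calls

# block size 1: 64 perform calls for the same 64 samples
calls = 0
for s in samples:
    dsp_perform([s])
calls_at_1 = calls

print(calls_at_64, calls_at_1)  # → 1 64
```

That 64x difference in call overhead applies per signal object, which is why throwing a whole complex patch into [block~ 1] gets expensive fast.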
For complex patches, throwing everything in a [block~ 1] subpatch could very quickly eat up all of your resources.
Sweet, because that's exactly what I want.
How much of an 8 core CPU can PD make use of? Seems like Max/MSP can't make use of all my CPU.
Well, I'd suggest isolating only those parts of your patch that require a smaller block size, and putting them in a subpatch with [block~ 1]. Otherwise Pd will be doing 63 more function calls for every single signal object in your patch.
Pd doesn't do automated parallelism. You can spawn child Pd instances using the [pd~] object.
so I was really thinking I would not go and buy Max because PD is clearly more practical for what I want to do, but I've hit a roadblock... with block... pun intended.
I can't get sigmund to function below a hop size of 64. The hop size can only be as low as the block size, so I put [block~ 32] and set the hop size to 32, and sigmund stops working, even though a print command reveals it is correctly set to a hop size of 32. If the [block~] is 64 and you put a hop of 32, then sigmund shows a hop size of 0 (meaning it's REALLY not working).
ugh, so frustrating! I thought this would be the ONE thing I could depend on.
hmm, no attachment function? alright.
please see if this patch works, just hit the metro on button.
I just want to get sigmund ultimately to work at a hop size of 8. It works fine in max. :\
Edit: I'm getting this error if this helps at all
"conflicting block~ objects in same page
vd~: vd~: no such delwrite~"
Sorry, I've only used [sigmund~] in patches with the default block size. You should ask on the Pd mailing list.