This is a basic grain delay -- it is not a drop-in replacement for Shuffling because, as stated, I have no idea of Shuffling's parameters or what they control. But it shows the basic technique. You'd choose grain positions, playback rates etc. according to different rules. Also, here I've used a Hann envelope on the grains for smoother overlap. To shuffle segments, you'd want a trapezoidal envelope that stays at full volume for most of its duration.
And yeah, I kind of want to know more about how it works...
Hm, well, let me propose an analogy. Analog synthesis is fairly standard: bandlimited waveforms, there are x number of ways to generate those, y number of filter implementations etc. But many of the oscillators and filters in, say, VCV Rack have a distinctive sound, because of the specific analog-emulation techniques and nonlinearities used per module. You can understand analog synthesis but that isn't enough to emulate a specific Rack module in Pd.
Re: Shuffling, I finally found this one sentence description: "Shuffling takes random sample fragments of variable dimensions from the last three seconds of the incoming sound and modulates its playback density and pitch" -- that's a granular delay.
A granular delay is fairly straightforward to implement in Pd: [delwrite~] is the grain source. Each grain is generated from [delread4~], where you can randomly choose the delay time, or sweep the delay time linearly to change the pitch. That takes care of "random sample fragments," "last three seconds," and "modulates... pitch" (you modulate playback density by controlling the rate at which grains are produced vs the duration of each grain -- normally I set an overlap parameter and grains-per-second, so that grain dur = overlap / grain_freq).
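To make the density logic concrete, here's a minimal sketch of the per-grain bookkeeping in Python. The names `overlap` and `grains_per_second` are my own convention as described above, not Shuffling's parameters, and the three-second window is just the figure from the GRM description:

```python
import random

DELAY_LINE_SECONDS = 3.0  # "last three seconds of the incoming sound"

def grain_duration(overlap, grains_per_second):
    # With overlapping grains, duration = overlap / grain frequency:
    # at overlap = 2, each grain lasts twice the inter-onset interval.
    return overlap / grains_per_second

def next_grain(overlap, grains_per_second, rng=random):
    """Pick parameters for one grain, mirroring the Pd logic:
    a random read position in the delay line (the [delread4~] time)
    plus the grain's duration (the envelope length)."""
    dur = grain_duration(overlap, grains_per_second)
    # a random sample fragment from within the delay line; the read
    # position must be at least one grain behind the write head
    delay_time = rng.uniform(dur, DELAY_LINE_SECONDS)
    return delay_time, dur
```

Sweeping `delay_time` linearly over the grain's duration, instead of holding it fixed, is what gives the pitch modulation.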
"... of variable dimensions" doesn't provide any useful technical detail.
But what isn't covered in that overview of a granular delay is the precise connection between the Shuffling plug-in's parameters and the audio processing. Since GRM Tools are closed-source, you would have to get hold of the plug-in and run a lot of tests (though if you have the plug-in, you could just host it with [vstplugin~] and be done), or guess -- in which case you'd end up with an effect that's somewhat like Shuffling, but maybe not exactly what the composer specified.
I'll send a grain-delay template a bit later, hang on.
I am working on a new patch for an existing piece. The composer mentions a GRM plug-in called "Shuffling" used for the granulation in Max.
I do not think I have GRM downloaded. Is that only for Max?
If I'm not mistaken, GRM Tools is a set of VST FX plugins. As such, they would not be restricted to Max. (In theory, you should be able to run it in Pd using IEM's vstplugin~ external.)
I'm pretty sure they are not free, though.
Unfortunately my Internet access is inconsistent and spotty today, so I'm not able to get any further details about Shuffling... Granular techniques in general are fairly straightforward in Pd, but it's the specifics about this plugin's input and behavior that I can't find at the moment (but someone else probably could). So I'm not sure how to emulate this specific granulator in Pd.
New to the program, running on a Lenovo Thinkpad, Windows 10, 64-bit version of Purr-Data (most recent). I am simply trying to play a video (AVI). I can create the window, open the file, see the video file show up in the window, but it does not play. When I toggle auto, the program crashes, forcing me to quit.
One recommendation is to install the K-Lite Codec Pack -- I can't find a direct link on this machine (regional search restrictions -- I can find "uptodown" style links but I don't trust them) but it should be easy to search. Gem depends on the codecs available on your system. If one fails, another might be successful.
With that said, though... it's with some regret that I have to say that the state of the art for graphics passed Gem by quite some time ago. I still use it in class because the students just need a basic smattering and it's good to stay within the Pd paradigm, but it's far less advanced than Jitter in Max/MSP ($) or vvvv (Windows only I believe, so you're in luck there -- free for noncommercial use) or Processing.
In Pd, there's also Ofelia, but get ready to write Lua code into the objects. It's much more performant than Gem but a lot harder to use.
PS There's a rant in there somewhere about that user handle but I haven't got energy for it now...
Seems like maxlib/arraycopy has a special implementation for copying entire arrays.
I think it depends on the number of elements being copied.
maxlib/arraycopy accepts start/size parameters with every copy request -- so it has to validate all the parameters for every request (https://github.com/electrickery/pd-maxlib/blob/master/src/arraycopy.c#L81-L166). This is overhead. The copying itself is just a simple for loop, nothing to see here.
[array get] / [array set], AFAICS, copy twice: get copies into a list, and set copies the list into the target.
So we could estimate the cost of arraycopy as: let t1 = time required to validate the request, and t2 = time required to copy one item, then t = num_trials * (t1 + (num_items * t2)) -- from this, we would expect that performance would degrade for large num_trials and small num_items, and improve the other way around.
And we could estimate the get/set cost as num_trials * 2 * t2 * num_items. Because both approaches use a simple for loop to copy the data, we can assume t2 is roughly the same in both cases. (Although... the Pd source de-allocates the list pointer after sending it out the [array get] outlet so I wonder if the outlet itself does another copy! No time to look for that right now.)
If you have a smaller number of trials but a much larger data set, then the for-loop cost is large. Get/set incurs the for-loop cost twice, while arraycopy does it once, so arraycopy is better.
In the tests with 100000 trials but copying 5 or 10 items, the for-loop cost is negligible, but it seems that arraycopy's validation code ends up costing more.
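The two cost estimates can be written out directly; plugging in illustrative (made-up) values for t1 and t2 shows the crossover in both directions:

```python
def arraycopy_cost(num_trials, num_items, t_validate, t_item):
    # arraycopy: one validation pass plus one copy loop per trial
    return num_trials * (t_validate + num_items * t_item)

def get_set_cost(num_trials, num_items, t_item):
    # [array get] / [array set]: two copy loops (array -> list -> array),
    # no per-request parameter validation
    return num_trials * 2 * num_items * t_item

# Many trials, few items: validation overhead dominates, get/set wins.
many_trials = (100000, 5)
# Few trials, many items: the double copy dominates, arraycopy wins.
many_items = (10, 100000)
```

The absolute numbers mean nothing -- only the inequality flips, which matches the benchmark behaviour described above.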
@KMETE I think, though, that if you want the data to arrive on time, you have to incorporate a [spigot]
If I'm reading this correctly, it would do the opposite of what the OP specifies.
The OP says, we're starting at 1. Then at some point, a 0 comes in and a 1 very quickly after it. In that case, the 0 should be suppressed.
The debounce.pd abstraction would pass the 0, and then suppress the 1.
The requirement is to pass through the last of a rapid succession of values, which I believe my [delay] patch does. (Also, with the requirement as stated, it's impossible to do "on time" because you have to wait to see if another value will come in quickly.)
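Outside of Pd, the behaviour of the [delay]-based patch could be sketched like this (an offline model over timestamped events, not the patch itself -- the function name and event format are my own):

```python
def pass_last_of_burst(events, window):
    """events: time-sorted list of (time, value) pairs.
    A value is emitted only if no newer value arrives within
    `window` seconds -- i.e. pass the LAST of a rapid succession.
    Each emission is delayed by `window`, which is why this can't
    happen "on time": you must wait to see if another value comes."""
    out = []
    for i, (t, v) in enumerate(events):
        is_last = (i + 1 == len(events))
        if is_last or events[i + 1][0] - t >= window:
            out.append((t + window, v))
    return out
```

With a 1 arriving, then much later a 0 followed quickly by a 1, the 0 is suppressed and the trailing 1 passes -- the opposite of what debounce.pd does.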
Last semester, I was giving a class demo about video camera input in Pd.
For a single blob, pix_blob does reasonably well (though, as it's a single weighted average, everything tends toward the center).
"... movement of fireflies" implies tracking multiple points.
Gem has pix_multiblob, but on my system, this was unacceptably slow. YMMV depending on hardware, image resolution etc., but in my case, it simply wasn't usable.
So instead, for the demo, I found a Processing sketch online for multiblob tracking, and added a function to send Open Sound Control messages to Pd. Worked a treat.
Ofelia is likely to be more performant than Gem, but it's much harder to use.
I would actually prefer to just use the equation inside an expression object
Without testing, I believe the SC lincurve formula could be written into expr as:
if(abs($f6) < 0.001, ($f1 - $f2) / ($f3 - $f2) * ($f5 - $f4) + $f4, $f4 + (($f5 - $f4) / (1.0 - exp($f6))) - ((($f5 - $f4) / (1.0 - exp($f6))) * pow(exp($f6), ($f1 - $f2) / ($f3 - $f2))))
$f1 = input
$f2 = input lower bound
$f3 = input upper bound
$f4 = output lower bound
$f5 = output upper bound
$f6 = curve
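For checking the expr against SuperCollider's behaviour, the same formula in Python (straight transcription of the expression above, untested against SC itself):

```python
from math import exp

def lincurve(x, in_lo, in_hi, out_lo, out_hi, curve):
    # Same branches as the expr: near-zero curve falls back to linear.
    norm = (x - in_lo) / (in_hi - in_lo)
    if abs(curve) < 0.001:
        return norm * (out_hi - out_lo) + out_lo
    grow = exp(curve)
    a = (out_hi - out_lo) / (1.0 - grow)
    # out_lo + a - a * grow**norm: hits out_lo at norm=0, out_hi at norm=1
    return out_lo + a - a * grow ** norm
```

At the endpoints the curved branch still lands exactly on the output bounds, which is a quick sanity check that the transcription is right.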