-
ddw_music
OK, so, some benchmarking results.
In Pd, given a 100000-element input array and asking [array-maxx] for 1000 output values, I get what seems to be a valid result in 99 ms... which is not exactly slow. (Pd gains performance here from the fact that [array max] is a primitive. If you had to run the array-max iteration with [until], it would be much, much slower.)

I did try it in SC. The [array-maxx] algorithm is extremely, brutally slow in SC because SC doesn't have an array-max primitive. The 100000 x 1000 = 100,000,000 iterations take about 3.6 seconds on my machine. Ouch.
The algorithm I described (iterate over the source array once, and insert items into the output array in descending order) is 5-6 times faster than Pd, and a couple hundred times faster than the SC brute force way. I did two versions: one uses the SC array insert primitive; the other uses only array get and put.
a = Array.fill(100000, { 1000000.rand });

(
f = { |a, n = 1000|
    a = a.copy;
    Array.fill(n, {
        var index = a.maxIndex;  // Pd equivalent [array max] is much faster
        var value = a[index];
        a[index] = -1e30;
        value
    });
};

g = { |a, n = 1000|
    var out;
    var index;
    a.do { |item|
        if(out.size == 0) {
            out = [item];
        } {
            index = out.size - 1;
            while { index >= 0 and: { out[index] < item } } { index = index - 1 };
            // now index points to the latest item in the array >= source item
            if(index < (out.size - 1) or: { out.size < n }) {
                // insert after 'index' and drop excess smaller items
                out = out.insert(index + 1, item).keep(n)
            }
        }
    };
    out
};

h = { |a, n = 1000|
    var out;
    var index, j;
    a.do { |item|
        if(out.size == 0) {
            out = [item];
        } {
            index = out.size - 1;
            while { index >= 0 and: { out[index] < item } } { index = index - 1 };
            if(index < (out.size - 1) or: { out.size < n }) {
                // index = index + 1;
                if(out.size < n) {
                    // if we haven't reached n output values yet,
                    // add an empty space and slide all items after 'index'
                    j = out.size - 1;
                    out = out.add(nil);
                } {
                    // if we already have enough output values,
                    // then 'item' must be greater than the current last output value
                    // so drop the last one
                    j = out.size - 2;
                };
                while { j > index } { out[j + 1] = out[j]; j = j - 1 };
                out[index + 1] = item;
            }
        }
    };
    out
};
)

// brute force O(m*n) way
bench { f.(a).postln };
time to run: 3.592374201 seconds.

// single-iteration with array-insert primitive
bench { g.(a).postln };
time to run: 0.13782317200003 seconds.

// single-iteration without array-insert primitive
bench { h.(a).postln };
time to run: 0.21714963900013 seconds.
I might have another stab at it in Pd. Note that you can't directly compare the Pd and SC benchmarks because the "language engine" isn't the same -- there's no guarantee of getting Pd down to 20 ms -- but it can probably do better than ~100 ms.
hjh
-
ddw_music
Actually I tried it a bit sooner -- works fine.
I also tried the single-array-iteration approach, and... managed only to lock up my computer, twice... at which point I see clearly why many really interesting projects in patching environments embed JavaScript or Lua: the type of algorithm that I described is straightforward in code, but something close to horrifying to try to build with pure patching objects. Patching is great for UI response, but for algorithms... there's a point where I just don't have time for it, unless it's really critical.
I'm still curious about the performance comparison, but I can do it in 15-20 minutes in SuperCollider, whereas tonight I was at it for more than an hour in Pd with no good result. This is not an essential inquiry, so I won't continue this way 🤷🏻‍♂️
hjh
-
ddw_music
@lacuna Cool idea --
I was curious about performance, so I made a little benchmarking patch.
Over to the right, there is a button to initialize the array to 100000 random values between 0 and 999999. In plugdata, this took 35-50 ms; in vanilla, about 14 ms. (2-3x performance degradation in plugdata is... a curious thing, but not relevant to the performance of the abstraction.)
Then I ran [array-maxx x 1000] on it, and found that it took 3-5 ms to do the 1000 array scans... which I found confusing: why would 100,000,000 array accesses be an order of magnitude faster than 100,000 array writes? (Sure, it's 1000 runs of the [array max] primitive, but I didn't expect it to be so much faster.)
Then I had a look at the results, which were all -1e+30, your sentinel value. It's doing the right number of iterations, but not yielding good data.
That makes me suspect that [array-maxx] might, on my machine, not be doing the work, and thus finishing early... a bug? Version mismatch?
(Specifically, what I was wondering about performance is, which is faster? To scan the entire input array n times, with each scan in a primitive, or to scan the input array once and manipulate the output list per value? I might try that later, but I'd like to be sure [array-maxx] isn't malfunctioning first. I'd guess, if the input array is large and n is small, the second approach could be faster, but as n gets closer to the input array size, it would degrade.)
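As a rough back-of-envelope sketch of that guess (my own numbers here, just counting comparisons, not measuring anything):

(
var m = 100000, n = 1000;
// first approach: n full scans of the m-element input array
("n full scans: about " ++ (m * n) ++ " comparisons").postln;
// second approach: one pass; each element is checked against the tail of an
// output list that never grows past n items, so the worst case is also about
// m * n, but if most elements lose their first comparison it stays near m
("single pass, best case: about " ++ m ++ " comparisons").postln;
("single pass, worst case: about " ++ (m * n) ++ " comparisons").postln;
)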
hjh
-
ddw_music
@jamcultur It's not a bug -- it's normal behavior.
It's documented in the html manual: https://msp.ucsd.edu/Pd_documentation/resources/chapter2.htm#s2.4.2
2.4.2. Depth first message passing
Whenever a message is triggered in Pd, the receiver may then send out further messages in turn, and the receivers of those messages can send yet others. So each message sets off a tree of subsequent messages. This tree is executed in depth first fashion.

"Depth first" means that, when a value comes out of an object's outlet, that branch of the tree must continue all the way to its end before the same outlet can go down a different branch, or before the same object can send data out of a different outlet.
Take a simpler example:
When the random number goes down the [* 2] branch, it must continue down to the [print secondValueCalculated] before it can advance down the [print firstValueCalculated] branch. The whole [* 2] branch must 100% complete.

Humans might look at this patch and think, "Well, the 'print first' branch is simpler, so, intuitively, it should be done first." Computers don't think like that.
In code (I'll use SC), it would look like:
(
~func1 = {
    var number = 10.rand;
    ~func2.value(number);
    "first value = %\n".postf(number);
};

~func2 = { |number|
    var result = number * 2;
    "second value = %\n".postf(result);
    result
};

~func1.value;
)

-> second value = 14
first value = 7
~func1 specifies to do ~func2 first, then print. The Pd patch is the same.
I wish Pd's documentation called more attention to looping formulas that Just Work. Users often end up just trying to figure it out for themselves and getting tied in knots.
A for loop goes like this:

for(i = 0; i < count; i++) { loop body }
That is, in C, there's a standardized way to write a counting loop -- following the formula avoids confusion. In Pd, there does exist (what should be) a standardized way to write a counting loop, and just like in C, following the formula avoids confusion -- but the formula isn't well enough known, and it's tricky for users to discover.
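For reference, here's the formula as I use it (from memory, so double-check it against the manual's own loop examples before relying on it); with a [100( message it counts 0 through 99:

[t b b]    right outlet -> [0( -> right (cold) inlet of [f ]    (reset the counter first)
           left outlet  -> [100( -> [until]                     (how many times to loop)
[until]    each bang    -> left (hot) inlet of [f ]
[f ]       outlet       -> the loop body, and also -> [+ 1]
[+ 1]      outlet       -> right (cold) inlet of [f ]           (the counter feedback)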
hjh
-
ddw_music
@atux said:
is there a way to remove all transients (except the first note) so that, by scrolling "pitch" box number up and down, you hear a single continuous glissando?
The typical way that MIDI synths support this is by setting it to mono mode and using "legato glide" or "fingered portamento."
I'm not sure if sfont~ supports that usage. If not, you're out of luck -- because the only way for this to work is if the instrument can distinguish between a note-on message that should reattack and one that should glide. If the instrument's code doesn't make this distinction, then no amount of MIDI note mangling will make a difference.
Or you could try setting a wide pitch bend range, but that usually sounds pretty silly with samples. (It would be nice if samplers could crossfade between different key ranges when pitch bending, but I've never seen any sampler do that, not even Kontakt. Maybe they added that feature? I don't use samplers often enough to keep up with the very latest developments.)
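To make that distinction concrete, here's a minimal SC sketch of the general mono/legato idea (my own illustration, not how sfont~ or any particular synth actually implements it): a note-on that arrives while another key is still held glides; one that arrives with no keys held re-attacks.

(
var heldNotes = List.new;
~noteOn = { |note|
    if(heldNotes.isEmpty) {
        "attack at note %\n".postf(note)                    // retrigger the envelope
    } {
        "glide from % to %\n".postf(heldNotes.last, note)   // legato: change pitch only
    };
    heldNotes.add(note);
};
~noteOff = { |note|
    heldNotes.remove(note);
    if(heldNotes.notEmpty) {
        "glide back to %\n".postf(heldNotes.last)           // fingered portamento on release
    } {
        "release\n".post
    };
};
)

~noteOn.(60); ~noteOn.(64); ~noteOff.(64); ~noteOff.(60);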
hjh
-
ddw_music
Recently hit upon this approach to OSC handling -- pro: easily scalable by just adding more [r] keys; con: no way to print an error for unrecognized OSC paths.
Maybe it's old news for some, but I hadn't seen it before, and it made this recent patch a lot easier to write.
hjh
-
-
ddw_music
@porres said:
The other interpolators in wave~'s help file are all "hermite", and the one used by SuperCollider/ELSE/tabosc4c~ is "Catmull-Rom Spline". So, "Catmull-Rom" should be specific enough
SC source tree/include/plugin_interface/SC_SndBuf.h:
inline float cubicinterp(float x, float y0, float y1, float y2, float y3)
{
    // 4-point, 3rd-order Hermite (x-form)
    float c0 = y1;
    float c1 = 0.5f * (y2 - y0);
    float c2 = y0 - 2.5f * y1 + 2.f * y2 - 0.5f * y3;
    float c3 = 0.5f * (y3 - y0) + 1.5f * (y1 - y2);
    return ((c3 * x + c2) * x + c1) * x + c0;
}
Maybe JMc was wrong to call it "3rd-order Hermite (x-form)" but... he didn't develop this standard formula by himself, so I assume that (30 years ago for SC1 or SC2) he got it from some source that called it this.
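For what it's worth, a quick numerical check (my own throwaway sketch) says the two names describe the same curve: the x-form Hermite coefficients above reproduce the Catmull-Rom polynomial exactly, so the difference is purely one of terminology.

(
var hermite = { |x, y0, y1, y2, y3|
    var c0 = y1;
    var c1 = 0.5 * (y2 - y0);
    var c2 = y0 - (2.5 * y1) + (2 * y2) - (0.5 * y3);
    var c3 = (0.5 * (y3 - y0)) + (1.5 * (y1 - y2));
    ((c3 * x + c2) * x + c1) * x + c0
};
var catmullRom = { |t, p0, p1, p2, p3|
    // p(t) = 0.5 * (2*p1 + (p2 - p0)*t + (2*p0 - 5*p1 + 4*p2 - p3)*t^2 + (-p0 + 3*p1 - 3*p2 + p3)*t^3)
    0.5 * (
        (2 * p1) +
        ((p2 - p0) * t) +
        (((2 * p0) - (5 * p1) + (4 * p2) - p3) * t * t) +
        ((p0.neg + (3 * p1) - (3 * p2) + p3) * t * t * t)
    )
};
10.do {
    var y = { 1.0.rand2 } ! 4;
    var x = 1.0.rand;
    // differences are zero up to floating-point rounding
    (hermite.(x, y[0], y[1], y[2], y[3]) - catmullRom.(x, y[0], y[1], y[2], y[3])).abs.postln;
};
)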
I don't have any more time to look at this (the project must be completed by next Tuesday), so... I have an lfdnoise3~ abstraction that will work for my use case, and I'm bowing out of this thread.
hjh
-
ddw_music
About those kinks --
lg-diff -- the slope of the Lagrange interpolation -- shows some gaps; at those places, the Lagrange interpolation will change direction suddenly. hm-diff may change direction at a sharp corner, but those corners link up. The fast oscillation toward the end of this one looks smoother to my eye in the Hermite version, while tabread4~ on the left looks uncomfortably close to straight line segments.
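If anyone wants to see where the kinks come from without squinting at plots, here's a rough sketch (mine, using the textbook 4-point Lagrange cubic, which is what tabread4~ computes as far as I can tell, and the Catmull-Rom/Hermite coefficients from cubicinterp above): compare the slope just to the left and just to the right of a table point. The Hermite pair matches, so its first derivative is continuous across table points; the Lagrange pair generally doesn't, and that mismatch is the kink.

(
var lagrange = { |t, a, b, c, d|
    // textbook 4-point Lagrange cubic, t in 0..1 between samples b and c
    (a.neg * t * (t - 1) * (t - 2) / 6) +
    (b * (t + 1) * (t - 1) * (t - 2) / 2) -
    (c * (t + 1) * t * (t - 2) / 2) +
    (d * (t + 1) * t * (t - 1) / 6)
};
var hermite = { |t, a, b, c, d|
    // same coefficients as SC's cubicinterp / Catmull-Rom
    var c0 = b;
    var c1 = 0.5 * (c - a);
    var c2 = a - (2.5 * b) + (2 * c) - (0.5 * d);
    var c3 = (0.5 * (d - a)) + (1.5 * (b - c));
    ((c3 * t + c2) * t + c1) * t + c0
};
var y = { 1.0.rand2 } ! 5;   // five random samples; look at the table point y[2]
var eps = 1e-4;
var slopes = { |f|
    // one-sided slopes at y[2]: from the segment ending there, and from the segment starting there
    var left = (f.(1.0, y[0], y[1], y[2], y[3]) - f.(1.0 - eps, y[0], y[1], y[2], y[3])) / eps;
    var right = (f.(eps, y[1], y[2], y[3], y[4]) - f.(0.0, y[1], y[2], y[3], y[4])) / eps;
    [left, right]
};
"Lagrange slopes at the joint (left, right): %\n".postf(slopes.(lagrange));
"Hermite slopes at the joint (left, right): %\n".postf(slopes.(hermite));
)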
hjh