• ddw_music

    you did spread misinformation about it

    So just correct it and move on.

    When people try to reinvent what already exists... I built this for the community. Ignoring existing tools and efforts misses the spirit of open source

    Both Pd and SC have a systemic problem wherein there is no good way for new users to know which extensions exist. Recent versions of Deken improve the situation somewhat for Pd, and there's a similar effort underway for SC, but "missing the spirit of open source" is quite a burden to lay on somebody who might have been using the tool for just a couple of weeks or months.

    So I'm out of this thread. I like a lot of the stuff in else, really, and I wish I'd known about it from the start. (Btw "when it's there in plugdata already" -- when I started using Pd in classes, there was no plugdata and there was no pd-extended, and no way to discover ELSE by chance.)

    hjh

    posted in technical issues read more
  • ddw_music

    @porres said:

    nah, I'll just leave as it is, the object is already too much complicated and I don't know how to deal with it (if anyone has a suggestion, please let me know).

    Maybe like this? Instead of velocity --> envelope, derive a gate by way of [change]. Then multiply the envelope by the velocity value. The volume will change if the velocity changes on a slurred note. If you don't want that, it should be possible to close the spigot when slurring to a note, and open it only when a brand-new note is being played.

    pd-mono-midi-else-2.png
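
    (Side note: the same control flow is compact to write out in SuperCollider, so here's a rough sketch of the idea as text. The \freq and \vel names and the envelope times are my own placeholders, not anything from the patch above; vel is assumed already normalized to 0..1.)

    (
    SynthDef(\monoVoice, { |freq = 440, vel = 0, out = 0|
        // gate goes high whenever velocity is nonzero; [change] in the
        // Pd patch plays the same role of reporting only transitions
        var gate = vel > 0;
        var env = EnvGen.kr(Env.adsr(0.01, 0.1, 0.8, 0.3), gate);
        // scale the envelope by velocity instead of triggering from it,
        // so a slurred note can change volume without retriggering
        Out.ar(out, (Saw.ar(freq) * env * vel) ! 2);
    }).add;
    )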

    hjh

    posted in technical issues read more
  • ddw_music

    @porres said:

    As for the discussion, if my objects are not working as expected, or if you have suggestions, you can open reports and requests.

    OK, sure.

    FWIW, I had developed my own methodology for this some years ago. At that time, I didn't know that [else/mono] existed.

    I was just sharing a way to handle this problem that worked for me. My intention wasn't to raise shortcomings with your library -- it was only, "this is how I do it."

    hjh

    posted in technical issues read more
  • ddw_music

    @porres said:

    well, [else/mono] has a built-in portamento... and I don't know what is wrong with it or how it works with [else/adsr~] for you...

    I'm also sensing an unnecessarily punchy tone here. I won't engage with this.

    hjh

    posted in technical issues read more
  • ddw_music

    @leonardopiuu said:

    I began using plugdata a couple days ago and I'm making a simple mono synth

    Mono synths are not simple! lol

    MIDI's way of representing keyboard activity is not exactly convenient for mono use. So, I'm going to guess that the velocity and/or ADSR behavior is the result of a logic problem before reaching the ADSR.

    Especially based on this comment: "BUT the velocity isn't sent out from the pack unless I press at least TWO keys at the same time..."

    It's tricky. This was my solution, in the midimonoglide.pd abstraction:

    pd-midimonoglide.png

    Confused? Yeah. MIDI-to-mono logic is complicated enough that it will probably take you more than a couple of days of plugdata usage to really grasp it. There's no such thing as a "simple MIDI mono synth."

    So I'd suggest using either [else/mono] -- you already have the ELSE lib, so there's nothing else to install -- or my abstraction (which helps with mono note glide -- but you might not need that at this point).

    Then, once you have properly cleaned up and filtered MIDI data, the ADSR "should" be the easy part.

    The ELSE way -- note here that every note-on sends out a non-zero velocity, so the envelope will retrigger repeatedly. I'm not crazy about that behavior, but it might be OK for your use case (and the patch is simpler).

    midi-mono-by-else.pd

    pd-mono-midi-else.png

    The hjh way -- where it becomes possible to distinguish between sliding, non-retriggering notes and genuinely new notes. (The [noteglide] isn't strictly necessary -- but, IMO fingered portamento is the whole point of a MIDI monosynth :wink: so I'm using it.)

    midi-mono-by-hjh.pd

    pd-mono-midi-hjh.png
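
    (If it helps to see the note-tracking logic as text, here's a rough sclang sketch of the kind of bookkeeping involved -- my own illustration, not the abstraction itself; 'held' and the handler names are invented.)

    (
    var held = List.new;  // stack of currently held note numbers
    var noteOn = { |num|
        var newNote = held.isEmpty;  // empty stack = genuinely new note
        held.remove(num); held.add(num);  // most recent key on top
        // retrigger only for a brand-new note; a slurred note just glides
        [\pitch, held.last, \retrigger, newNote].postln;
    };
    var noteOff = { |num|
        held.remove(num);
        if(held.isEmpty) {
            [\gate, 0].postln;           // last key released: close the gate
        } {
            [\pitch, held.last].postln;  // slide back to a still-held key
        };
    };
    noteOn.(60); noteOn.(64); noteOff.(64);  // hold C, slur up to E, release E
    )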

    Hope this helps -- and no worries. If you made this much progress in Pd in a couple days, to the point where you run into the not-at-all obvious subtleties of MIDI mono handling, that's pretty good.

    hjh

    posted in technical issues read more
  • ddw_music

    Finally posted video from my electronic ensemble class's concert last June. Pd didn't figure into the audio side, but 2 out of 3 pieces used Gem for the graphical backdrop.

    It was a fun night, hope you enjoy!

    Link goes directly to the third piece; the playlist includes the other two.

    hjh

    posted in output~ read more
  • ddw_music

    Here's a short clip of an interactive installation piece (I guess more of a prototype?) that was shown last weekend.

    • Audio: SuperCollider (+ VSTPlugin for the piano and guzheng)
    • Graphics: Pure Data + Gem
    • UI: Open Stage Control (with heavy CSS gradient abuse :laughing: -- iPad batteries really don't like rendering 13-14 new gradients, 9 times per second)

    Musical decisions are made by flipping bits in a 40-bit word, and mapping various segments of the bits onto sequencing parameters.
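
    (For the curious, the mapping idea looks something like this in sclang -- the field boundaries and parameter names here are invented for illustration, and since sclang integers are 32-bit, the 40-bit word is kept as an array of bits.)

    (
    var bits = Array.fill(40, { 2.rand });  // stand-in for the 40-bit decision word
    var field = { |offset, width|
        // read 'width' bits starting at 'offset' as an unsigned integer
        bits.copyRange(offset, offset + width - 1).reverse.sum { |b, i| b << i }
    };
    var step   = field.(0, 4);  // e.g. bits 0-3 -> sequence step count
    var degree = field.(4, 3);  // e.g. bits 4-6 -> scale degree
    [\step, step, \degree, degree].postln;
    )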

    At some point, I'd like to do an explainer video, but not today.

    hjh

    posted in output~ read more
  • ddw_music

    Ahem.

    pd-mod-again.png

    lol

    hjh

    posted in technical issues read more
  • ddw_music

    @jameslo said:

    @whale-av My issue is (was?) that there are things that affect the DSP sort order that you can't see from the graphics alone. As far as I understand, that's different from @ddw_music's question, which is "how much of the DSP graph does Pd have to resort when there's a change"? That said, I don't understand the reason that @spacechild1 gave on the mailing list--I'd love to see an example.

    Well, here's what he said: "The issue is that DSP objects can be 'connected' by other means. Take the delay objects, for example. Some of these objects need to be aware of the order of execution within the DSP graph. Others will affect other objects, e.g. automatically changing the channel count. Pd itself doesn't know what a particular ugen is doing, so the only thing it can do is rebuild the whole graph."

    @whale-av Yes, I'm writing a paper, but the paper isn't about Pd graph sorting -- this is a side point -- the latency thread is interesting but would be way too much detail for basically a footnote.

    hjh

    posted in technical issues read more
  • ddw_music

    Per spacechild1 on the mailing list: Yes, it does sort all dsp objects in all canvases, every time (because they may have invisible connections, like delay objects or send~ / receive~ / throw~ / catch~).

    hjh

    posted in technical issues read more
  • ddw_music

    @jameslo said:

    @ddw_music If Pd's topological sort algorithm were smart enough to know when a change inside an abstraction did not change the sort outside the abstraction, then it would be an easy lift for it to detect that feedback to some abstraction (e.g. one with an [inlet~], an [outlet~], and no connections) does not produce a cycle. But the last I checked, that was not the case, so I would bet that when anything changes, the whole directed graph is resorted.

    I guess the issue, then, is that if all the tilde objects get lifted into one flat list, a change inside a subcanvas could land in the midst of objects outside that canvas. In that case, it probably is necessary to walk the entire tree.

    In the video, he starts off the sorting section by saying that canvases tell the DSP layer what has changed locally within the canvas, but then discusses the sorting flow when DSP gets turned on (which obviously has to start at the topmost canvas).

    And then in g_canvas.c there are comments like

            /* create a new "DSP graph" object to use in sorting this canvas.
            If we aren't toplevel, there are already other dspcontexts around. */
    

    so the data structure does seem to be split up by canvases.

    It's not a crucial point -- just that I'm writing a paper and wanted to contrast SC's per-SynthDef graph sorting with Pd's seemingly global sorting. I'd rather not make a false claim.

    Mailing list, I guess.

    hjh

    posted in technical issues read more
  • ddw_music

    @lacuna said:

    Maybe the answer is in this video of Miller's classes at 1:04:00

    OK, will have a look, thanks!

    hjh

    posted in technical issues read more
  • ddw_music

    I always had the impression that Pd reschedules and re-sorts all DSP objects in the system globally whenever a change is made... is that true?

    Wondering because I could also imagine that subpatches or abstractions could reduce the scope of the DSP design that has to be analyzed at any moment.

    That is, if I'm in a subpatch and I make or delete a DSP connection, in theory it could just re-sort the DSP defined within the subpatch, but internally it might just redo everything anyway.

    Purely a matter of curiosity. I don't have a concrete use case.

    Thanks,
    hjh

    posted in technical issues read more
  • ddw_music

    @jameslo said:

    FYI, an algorithm I was taught in school uses a heap-based priority queue for that B array.

    Oh nice, I overlooked that one... the limitations of having zero formal training in CS. I actually have a pure vanilla float-heap abstraction already (https://github.com/jamshark70/hjh-abs)... lower values to the top. For descending order, I'd just negate on the way in and out.
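
    (For reference, a minimal sketch of that idea in sclang, using the built-in PriorityQueue as the min-heap -- my illustration, not the hjh-abs abstraction; it assumes a.size >= n. Keeping only the n largest values seen so far makes each insertion O(log n), so the whole scan is O(m log n) instead of O(m*n).)

    (
    var topN = { |a, n|
        var q = PriorityQueue.new, count = 0;
        a.do { |item|
            if(count < n) {
                // priority = value, so the smallest kept value sits on top
                q.put(item, item); count = count + 1;
            } {
                if(item > q.topPriority) {  // beats the smallest kept value?
                    q.pop;                  // drop the smallest...
                    q.put(item, item);      // ...and keep the new one
                }
            }
        };
        Array.fill(n, { q.pop }).reverse  // pops ascend, so reverse for descending
    };
    topN.([3, 5, 2, 4, 1], 3).postln;  // -> [5, 4, 3]
    )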

    hjh

    posted in abstract~ read more
  • ddw_music

    @lacuna said:

    Still I don't understand your idea of building a list without rescanning the array or list for each peak?

    Sure. To trace it through, let's take a source array A = [3, 5, 2, 4, 1] and we'll collect 3 max values into target array B.

    The [array max] algorithm does 3 outer loops; each inner loop touches 5 items. If m = source array size and n = output size, it's O(m*n): here, that's always exactly 15 iterations, best case and worst case alike.

    My suggestion was to loop over the input (5 outer loops):

    1. Outer loop i = 0, item = 3.
      • B is empty, so just add the item.
      • B = [3].
    2. Outer loop i = 1, item = 5.
      • Scan backward from the end of B to find the smallest B value > item.
        • j = B size - 1 = 0, B item = 3, keep going.
        • j = j - 1 = -1, reached the end, so we know the new item goes at the head.
      • Slide the B items down, from the end to j+1.
        • k = B size - 1 = 0, move B[0] to B[1].
        • k = k - 1 = -1, stop.
      • Now you have B = [3, 3].
      • Put the new item in at j+1: now B = [5, 3].
    3. Outer loop i = 2, item = 2.
      • Scan backward from the end of B to find the smallest B value > item.
        • j = B size - 1 = 1, B item = 3, found it!
      • There's nothing to slide (j+1 is 2, past array end), so skip this step.
      • B hasn't reached the requested size, so just add the item.
      • Now B = [5, 3, 2].
    4. Outer loop i = 3, item = 4.
      • Scan backward from the end of B to find the smallest B value > item.
        • j = B size - 1 = 2, B item = 2, keep going.
        • j = j - 1 = 1, B item = 3, keep going.
        • j = j - 1 = 0, B item = 5, found it!
      • Slide the B items down, from the end to j+1.
        • Now B is full, so start with second-to-last, not the last.
        • k = size - 2 = 1, move B[1] to B[2]: [5, 3, 3].
        • k = k - 1 = 0, this item shouldn't move (k == j), so, stop.
      • Put the new item in at j+1: now B = [5, 4, 3].
    5. Outer loop i = 4, item = 1.
      • Scan backward from the end of B to find the smallest B value > item.
        • j = B size - 1 = 2, B item = 3, found it!
        • B is full, and B's smallest item > source item, so there is nothing to do.

    Finished, with B = [5, 4, 3] = correct.

    This looks more complicated, but it reduces the total number of iterations by exiting the inner loop early when possible. If you have a larger input array, and you're asking for 3 peak values, the inner loop might have to scan all 3, but it might be able to stop after 2 or 1. Assuming those are equally likely, it's (3+3+3) / 3 = 3 inner iterations per item in the original approach, vs (3+2+1) / 3 = 2 here (for larger output sizes, this average will approach n/2).

    But there are additional savings: as the scan proceeds, the B array becomes biased toward higher values. Assuming the A values are uniformly distributed, the longer it runs, the greater the probability that an inner loop will find that its item isn't big enough to be added, and bail out on the first test, or upon an item close to the end of B: either no work to do, or a lot less.

    The worst case, then, would be that every item is a new peak: a sorted input array. In fact, that does negate the efficiency gains:

    a = Array.fill(100000, { |i| i });
    -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ... ]
    
    // [array-maxx] approach
    bench { f.(a).postln };
    time to run: 4.2533088680002 seconds.
    
    // my approach, using a primitive for the "slide" bit
    bench { g.(a).postln };
    time to run: 3.8446869439999 seconds.
    
    // my approach, using SC loops for the "slide" bit
    bench { h.(a).postln };
    time to run: 7.6887966190002 seconds.
    

    In this worst case, each inner loop has to scan through the entire B array twice: once to find that the new item should go at the head, and once again to slide all of the items down. So I'd have guessed about 2x worse performance than the original approach -- which is roughly what I get. The insert primitive helps (2nd test) -- but the randomly-ordered input array must cause many, many entire inner loops to be skipped, to get that 2 orders of magnitude improvement.

    BUT it looks like Pd's message passing carries a lot of overhead, such that it's better to scan the entire input array repeatedly because that scan is done in C. (Or, my patch has a bug and it isn't breaking the loops early? But no way do I have time to try to figure that out.)

    hjh

    posted in abstract~ read more
  • ddw_music

    Tried again, as an exercise.

    Failed.

    [array max] is so fast that any alternative patched together from message objects will be slower. (Or, array indexing in Pd is so much slower than in SC that it pays off to avoid it.)

    And I had some bug where it omitted a value that it should have kept.

    So I'm done. If I need to do something like this again another time, I'd have to use a scripting object. Too frustrating.

    Here's as far as I got (with a little debugging junk still left in) -- array-maxx-benchmarking-2.pd -- any other insight would be nice, I guess? For closure.

    hjh

    posted in abstract~ read more
  • ddw_music

    OK, so, some benchmarking results.

    In Pd, given a 100000-element input array and asking [array-maxx] for 1000 output values, I get what seems to be a valid result in 99 ms... which is not very slow. (Pd gains performance here from the fact that [array max] is a primitive. If you had to run the array-max iteration with [until], it would be much, much slower.)

    I did try it in SC. The [array-maxx] algorithm is extremely, brutally slow in SC because SC doesn't have an array-max primitive. The 100000 x 1000 = 100,000,000 iterations take about 3.6 seconds on my machine. Ouch.

    The algorithm I described (iterate over the source array once, and insert items into the output array in descending order) is 5-6 times faster than Pd, and a couple hundred times faster than the SC brute force way. I did two versions: one uses the SC array insert primitive; the other uses only array get and put.

    a = Array.fill(100000, { 1000000.rand });
    
    (
    f = { |a, n = 1000|
        a = a.copy;
        Array.fill(n, {
            var index = a.maxIndex;  // Pd equivalent [array max] is much faster
            var value = a[index];
            a[index] = -1e30;
            value
        });
    };
    
    g = { |a, n = 1000|
        var out;
        var index;
        a.do { |item|
            if(out.size == 0) {
                out = [item];
            } {
                index = out.size - 1;
                while {
                    index >= 0 and: {
                        out[index] < item
                    }
                } {
                    index = index - 1
                };
                // now index points to the latest item in the array >= source item
                if(index < (out.size-1) or: { out.size < n }) {
                    // insert after 'index' and drop excess smaller items
                    out = out.insert(index + 1, item).keep(n)
                }
            }
        };
        out
    };
    
    
    h = { |a, n = 1000|
        var out;
        var index, j;
        a.do { |item|
            if(out.size == 0) {
                out = [item];
            } {
                index = out.size - 1;
                while {
                    index >= 0 and: {
                        out[index] < item
                    }
                } {
                    index = index - 1
                };
                if(index < (out.size-1) or: { out.size < n }) {
                    // index = index + 1;
                    if(out.size < n) {
                        // if we haven't reached n output values yet,
                        // add an empty space and slide all items after 'index'
                        j = out.size - 1;
                        out = out.add(nil);
                    } {
                        // if we already have enough output values,
                        // then 'item' must be greater than the current last output value
                        // so drop the last one
                        j = out.size - 2;
                    };
                    while { j > index } {
                        out[j+1] = out[j];
                        j = j - 1
                    };
                    out[index + 1] = item;
                }
            }
        };
        out
    };
    )
    
    // brute force O(m*n) way
    bench { f.(a).postln };
    time to run: 3.592374201 seconds.
    
    // single-iteration with array-insert primitive
    bench { g.(a).postln };
    time to run: 0.13782317200003 seconds.
    
    // single-iteration without array-insert primitive
    bench { h.(a).postln };
    time to run: 0.21714963900013 seconds.
    

    I might have another stab at it in Pd. Note that you can't directly compare the Pd and SC benchmarks, because the "language engine" isn't the same -- there's no guarantee of getting Pd down to 20 ms -- but it can probably do better than ~100 ms.

    hjh

    posted in abstract~ read more
  • ddw_music

    Actually I tried it a bit sooner -- works fine.

    I also tried the single-array-iteration approach, and... managed only to lock up my computer, twice... at which point... I see clearly why many really interesting projects in patching environments embed JavaScript or Lua: the type of algorithm that I described is straightforward in code, but something close to horrifying to try to build with pure patching objects. Patching is great for UI response, but for algorithms... there's a point where I just don't have time for it, unless it's really critical.

    I'm still curious about the performance comparison, but I can do it in 15-20 minutes in SuperCollider, whereas tonight I was at it for more than an hour in Pd and no good result. This is not an essential inquiry so I won't continue this way 🤷🏻‍♂️

    hjh

    posted in abstract~ read more
  • ddw_music

    @lacuna Ah ok -- I guessed it would be easy to fix. Will try it tomorrow.

    Cheers!
    hjh

    posted in abstract~ read more
  • ddw_music

    @lacuna Cool idea --

    I was curious about performance, so I made a little benchmarking patch.

    Over to the right, there is a button to initialize the array to 100000 random values between 0 and 999999. In plugdata, this took 35-50 ms; in vanilla, about 14 ms. (2-3x performance degradation in plugdata is... a curious thing, but not relevant to the performance of the abstraction.)

    Then I ran [array-maxx x 1000] on it, and found that it took 3-5 ms to do the 1000 array scans... which I found confusing: why would 100,000,000 array accesses be an order of magnitude faster than 100,000 array writes? (Sure, it's 1000 runs of the [array max] primitive, but I didn't expect it to be that much faster.)

    Then I had a look at the results, which were all -1e+30, your sentinel value. It's doing the right number of iterations, but not yielding good data.

    That makes me suspect that [array-maxx] might, on my machine, not be doing the work, and thus finishing early... a bug? Version mismatch?

    pd-array-maxx-test.png

    array-maxx-benchmarking.pd

    (Specifically, what I was wondering about performance is, which is faster? To scan the entire input array n times, with each scan in a primitive, or to scan the input array once and manipulate the output list per value? I might try that later, but I'd like to be sure [array-maxx] isn't malfunctioning first. I'd guess, if the input array is large and n is small, the second approach could be faster, but as n gets closer to the input array size, it would degrade.)

    hjh

    posted in abstract~ read more