• ddw_music

    @lead said:

    So what I actually mean is I want to minimise the error in a BPM change of two semitones, and find the start and end values where it's lowest, even if they're both fractions. I think?

    Yes -- you'd have an error value for the starting bpm, and another error value for bpm * ratio, and you would want to minimize the sum.

The tricky thing mathematically is that the error function is piecewise -- every time you jump to the next rounding point, it's a different segment. So calculus-based methods for continuous functions can't be used directly.

    Playing around with it in SuperCollider, though (not Pd, for two reasons: 1/ SC has double-precision floats, 2/ Pd is clumsy at array math):

~error = { |a, b, factor = 0.001|
    // total rounding error: distance of each bpm from its nearest rounded value
    (a absdif: a.round(factor))
    +
    (b absdif: b.round(factor))
};
r = -2.midiratio;  // ratio for 2 semitones down
    
    f = 69;
    
// let's test 0.005 bpm on either side, in 0.00001 steps: 1001 values total
    a = (f - 0.005, f - 0.00499 .. f + 0.005);
    
    b = a * r;  // slower bpms
    
    c = ~error.(a, b);  // SC auto-expands math ops to arrays!
    
    c.plot;
    
c.minIndex;  // 500
a[c.minIndex];  // 68.999999999994 --> 69
    

    So your "good" one is as good as that range is going to get.

    Now, if you do the same thing for f = 77, the minIndex is 700 -- not 500 in the middle! -- and a[c.minIndex] = 77.001999999992 or basically 77.002.

    Then:

    ((77.002 * r) absdif: round(77.002 * r, 0.001)) = 1.6905725900074e-05

    ((77.000 * r) absdif: round(77.000 * r, 0.001)) = 0.00020129683781533

    So that shift of 0.002 bpm makes a huge difference.

This test also shows there's no benefit in checking "original" bpm values that aren't rounded to 0.001. So change it to a = (f - 0.005, f - 0.004 .. f + 0.005);. EDIT: In that case, the "a" term in the ~error function will be 0 (if factor = 0.001), so you could simplify to:

~error = { |a, factor = 0.001|
    // only the adjusted bpm contributes any error now
    a absdif: a.round(factor)
};
    
    // and...
    c = ~error.(b);
    

    ... which then does come back around to the problem that jameslo linked.

    It's a brute-force technique but should work for the purpose.
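For reference, the whole search can be collected into one function (a minimal sketch of the above; the name ~bestBpm and its defaults are arbitrary):

~bestBpm = { |f, semitones = -2, width = 0.005, factor = 0.001|
    var r = semitones.midiratio;
    // candidate "original" bpms on the 0.001 grid, per the EDIT above
    var a = (f - width, f - width + factor .. f + width);
    var b = a * r;  // adjusted bpms
    var err = b absdif: b.round(factor);  // rounding error of each candidate
    a[err.minIndex]  // candidate with the smallest error
};

~bestBpm.(69);  // -> 69 (printed as 68.999999999994)
~bestBpm.(77);  // -> 77.002 (approximately)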

    hjh

posted in technical issues
  • ddw_music

    @lead said:

    So, starting with the smallest number of decimal places and ending with the smallest number of decimal places is preferable, does that make sense?

    Formally, it doesn't make sense.

The ratio for 2 semitones down is 2 ** (-2 / 12). 2 is a prime number, and raising any prime to a non-integer rational power yields an irrational number: its decimal expansion goes on forever without repeating. (If 2 ** (1/6) were some fraction p/q in lowest terms, then p ** 6 = 2 * (q ** 6), which unique prime factorization rules out.)

A (nonzero) rational number times an irrational number must be irrational. So your initial bpm value * the ratio is irrational and has infinitely many decimal places. To say "this one has 3 places" glosses over the real situation.

    What you're really doing is rounding this irrational number to an arbitrary number of places. The denominator of the rounded number will be 10 ** num_places -- thus both the numerator and denominator are integers and the result is rational.

    The difference (or quotient, depending how you want to measure it) between bpm * ratio and the rounded version is an error value.

    And the math problem, then, is to minimize the error.

    You can see it more clearly if you use a language with double precision floats, e.g., SuperCollider:

    f = { |bpm, semitones = -2, places = 3|
        var r = 2 ** (semitones / 12);  // '.midiratio'
        var bpm2 = bpm * r;
        var rounded = bpm2.round(10 ** places.neg);
        // for easier comparison I'll "absolute-value" the error values
        var errorDiff = bpm2 absdif: rounded;
        var errorRatio = bpm2 / rounded;
        if(errorRatio < 1) { errorRatio = 1 / errorRatio };
        [bpm2, errorDiff, errorRatio]
    };
    
    f.value(69);  // [ 61.472011551683, 1.1551683407163e-05, 1.0000001879178 ]
    f.value(59);  // [ 52.56302437028, 2.4370280016228e-05, 1.0000004636394 ]
    f.value(77);  // [ 68.599201296806, 0.00020129680612513, 1.0000029343985 ]
    

    ... where, indeed, the error for 77 * r is about an order of magnitude worse.

    @jameslo -- "It would be cool if you were asking https://math.stackexchange.com/questions/2438510/can-i-find-the-closest-rational-to-any-given-real-if-i-assume-that-the-denomina "

    I think this is exactly what the problem reduces to -- what is the closest rational to x where the denominator = 1000. (However, the numerical methods in the thread will likely evaluate better with double precision floats. Pd prints 6 digits so it may not be clear what you're seeing.)
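To make that reduction concrete (a quick sketch; rounding to 3 places just picks the nearest n/1000):

v = 69 * -2.midiratio;  // 61.472011551683...
n = (v * 1000).round;   // 61472: numerator of the nearest rational with denominator 1000
v - (n / 1000);         // ~1.1551683e-05 -- the errorDiff value from above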

    Actually something else... if x is the original bpm and y is the adjusted, find x and y where the error is minimized. So far the assumption is that x is an integer, but maybe the error could be even lower if they're both fractions.

    hjh

posted in technical issues
  • ddw_music

    One word: PlugData.

    https://github.com/plugdata-team/plugdata/releases

    Works seamlessly with Reaper and I've no reason to believe it would be any worse with Ableton.

    It's based on Camomile, just that the hard stuff has already been done for you.

    hjh

posted in technical issues
  • ddw_music

    @timothyschoen said:

    I'm interested in fixing some of the problems you're describing with [playhead], I agree that the outputs are confusing and unhelpful, so it might be worth it to add a new object, with a similar interface to your [playhead-tick] object, maybe call it [beat]?

    Sure, that would be nice.

    Actually I wasn't using [playhead] (though it might be easier if I were) -- I'm reading the raw messages from [r playhead].

    A couple of things that [playhead-tick] does that I find really crucial:

• The beat triggers are generated by anticipating the beat (position --> [wrap] --> [moses a_threshold], where the default threshold is 0.9), then taking 1 - the wrapped (fractional) part of the position and delaying by that amount. This is because the resolution of the position values is limited by the hardware block size. The anticipate-and-delay approach should be on time for any hardware buffer configuration (and if it isn't, just allow more of a gap: [playhead-tick 0.2] --> threshold = 0.8). See the sketch after this list.

    • I found it very useful to know not only that playback has started or stopped, but at what time. My [playhead-tick] did some tap dancing to get it. With [playhead] it would be:

[image: playAtBeat-from-playhead.png]
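The arithmetic behind the first bullet, sketched in SC for clarity (the numbers are hypothetical; in the abstraction this is done with [wrap], [moses], and a delay):

(
var pos = 3.95, beatDurMs = 500, threshold = 0.9;
var frac = pos.frac;  // wrapped (fractional) part of the position
if(frac >= threshold) {
    // anticipate: schedule the trigger for the moment the beat arrives
    var delayMs = (1 - frac) * beatDurMs;
    "beat trigger in % ms".format(delayMs.round(0.01)).postln;  // 25.0 ms
};
)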

    I could also replace the regular [playhead] object, though it might be nice to leave it in for backward compatibility, compatibility with Camomile, and because it provides a lot of low-level information in case you need that.

    I think it's probably better to have both.

    Your tutorial made me realise that I'm sending the playhead data every Pd audio block (64 samples in plugdata) instead of every DAW audio block, which is useless!

    I agree. There is not much benefit in getting the same position value 4, 8 or 16 times.

    (Is it possible to adjust this for [r playhead] too?)

    hjh

posted in tutorials
  • ddw_music

    New video tutorial, ending with a section on PlugData (Pd as a VST plugin in a host).

    https://forum.pdpatchrepo.info/topic/14244/video-tutorial-pd-sync-with-ableton-link-and-with-daw-plugdata

(PlugData was mentioned above... it is COOOOOOOOL; apart from issues with some external packs, it very nearly blows away all the other approaches.)

    hjh

posted in tutorials
  • ddw_music

Been working on this for a while; only this week did I have a chance to record the material.

[abl_link~] (in Deken) has existed for a while for Ableton Link sync. PlugData is the more recent option, running Pd in a DAW with timeline sync.

With both approaches, however, you'll crash into a handful of problems whose solutions are not at all obvious. This project is all about one question: what does it really take to get Ableton Link, or DAW timeline, sync working properly? 49 minutes :-o ... a bit longer than I expected.

    (This is exposing some rough edges in Pd's interfaces. Unfortunately rough edges can end up being a feedback loop -- something is hard, so people avoid it, and as a result, it remains hard and people continue to avoid it. I hope this will break the feedback loop in some crucial places and make these types of sync more approachable.)

    hjh

    PS Note one correction in the video's description.

posted in tutorials
  • ddw_music

    @seb-harmonik.ar said:

    @ddw_music I guess I would say that might be a use of an argument to specify a name, but I can see now why you might want it dynamically settable via symbol box or something.

There's one reason why the name can't be set only by an argument: 1/ create the object with one name; 2/ set the gain by GUI; 3/ edit the object text to change the name by argument... now the gain reverts to the initial level and your carefully arranged mix is destroyed. (I tested this in a simplified case.)

    I meant: if you restructure the dsp graph, I think the audio graph will stop and start whether you do it manually or not, while it gets recomputed. And I think that applies to changing send/receive/throw/catch names.

    I don't object to this. I'd think it's a good solution.

    Slightly OT: the reality of the situation is that Pd is not as ideal as supercollider for live-coding, since so much stuff happens on the audio thread...

Right, and the video is a bit misleading in suggesting that live patching is the goal. What I'm really after is improved usability. For instance, in my class demos I often use ezoutput~, but it resets to -inf gain when you load the patch, so I'm constantly guessing what volume level in a student's homework will not destroy my ears. My fader~ uses savestate, so you put mixers in the top-level patch and saving the patch also saves your mix. Also, you should see the truly insane things students come up with when they're trying to patch effects in... but I'm providing a structure that is conceptually the same as send/return buses. Students often get this wrong too, but it's really nice to have that send trim control right there, following a time-tested design... once you understand how to implement that design with these objects, you don't waste time considering bad signal-flow designs.

    Why shouldn't we have nice things in Pd? It may not be MSP's priority but I tend to think that usability matters.

    I think you could still get around dynamic patching if you used a bunch of global state to lookup static abstraction names/templates...

    I'm not sure what that would look like. In any case, the dynamic patching works pretty well, apart from spurious error messages while savestate is updating the objects. (I complained about that but was told pretty clearly that even a momentary improper state shouldn't be tolerated under any circumstances, even if you know what you're doing and your patch will fix it automatically.)

    hjh

posted in technical issues
  • ddw_music

    @seb-harmonik.ar "it seems like the reason is to avoid a block delay?"

    I really wish he had elaborated on that.

    In SuperCollider, I have an example where the InFeedback UGen produces inconsistent results depending on the evaluation order: https://scsynth.org/t/supernova-groups-threads-ndef/1915/4

    This might be what MSP is thinking of: the choice is either to impose a consistent block delay (breaking backward compatibility) or to risk inconsistent behavior depending on scheduler ordering of audio objects.
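A minimal SC sketch of the kind of order dependency I mean (bus and SynthDef names are arbitrary, not taken from the linked thread):

(
s.waitForBoot {
    var bus = Bus.audio(s, 1);
    SynthDef(\writer, { |out| Out.ar(out, SinOsc.ar(440, 0, 0.1)) }).add;
    // InFeedback tolerates any node order, but if the reader is ordered
    // before the writer, it sees the bus contents one block late
    SynthDef(\reader, { |in| Out.ar(0, InFeedback.ar(in, 1)) }).add;
    s.sync;
    x = Synth(\reader, [in: bus]);                 // reader ordered first...
    y = Synth(\writer, [out: bus], x, \addAfter);  // ...writer after: one-block delay
};
)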

    What he doesn't explain is why name changes have anything to do with this.

    I guess I will have to join the mailing list and ask for clarification... didn't really want to do that because I don't need every thread coming into my inbox, but that seems to be where this is being discussed.

    My use case is a modular mixing framework.

    You create [fader~ myCoolSynth] (using the default target name "main"). Now there are:

    • [catch~ myCoolSynth-auxL] and [catch~ myCoolSynth-auxR]
    • [throw~ main-auxL] and [throw~ main-auxR]
    • [send~ myCoolSynth-preL] and [send~ myCoolSynth-preR] (pre-fader sends)
    • [send~ myCoolSynth-postL] and [send~ myCoolSynth-postR] (post-fader sends)

Now you change your mind and want to call the channel "lead." So you go to the name symbol box (because, in a DAW mixer, you can change the name of a channel by [double-]clicking and typing), type "lead" <ret>, and the catch~ and send~ objects need to change their names... but this is not supported in vanilla.

    spacechild1 had suggested dynamic patching for this, and was generous enough to make dcatch~ and dsend~ abstractions for me... ext13 may be fine as well, although the mailing list thread points out that it's 10 years old without recent updates.

    (but if you're dynamic patching signal objects that's the case anyways I think)

    I found in practice with my mixer objects that sometimes it's necessary to reset DSP, but not always.

    hjh

posted in technical issues
  • ddw_music

    Related: Does anyone know the design reason for these limitations?

• Multiple [throw~ a] but only one [catch~ a] is allowed (one and only one summing-bus target).
  • And you can set the throw~ name dynamically, but catch~ is frozen forever.
• Only one [send~ b] but multiple [receive~ b].
  • And you can set the receive~ name dynamically, but send~ is frozen forever.

    What if I need to use the same sum-bus signal in multiple places? Sure, you can catch~ --> send~ but why the extra object?

    And what if you would like the user to be able to choose the routing at runtime? The lack of "set" messages for catch~ and send~ is a real bother here. I'm currently depending on dynamic patching magic from spacechild1 to get around this (adding a dependency on iemguts, and getting tons of errors when loading some patches, and devs were rather resistant to my use case).

    I would have to guess that Miller Puckette is trying to prevent some impossible case by imposing these limits, but I'm not insightful enough to see what those impossible cases are. Absent that, it looks like a case where Pd is just stubbornly getting in the user's way.

    hjh

posted in technical issues
  • ddw_music

    @KMETE Requoting myself: "To crossfade properly, it needs to subtract 100 ms from the file's total duration."

    Let's say you have 10 seconds of audio.

    You want to loop it, with a 100 ms crossfade.

    If you just use "all" then it will do this:

    1. Start at 0 and play to 10 sec.
    2. Loop back to 0.
3. At this point, for a crossfade, the step 2 audio fades in, and the step 1 audio fades out. But step 1 has already run out of audio.

So at that point, you don't get a crossfade. You get an immediate jump to silence (maybe with a click), and then a fade-in.

    It "seems to work" in that there is no error, nothing crashes, no major audio glitches. But it isn't crossfade looping.

    The solution here changes it to:

    1. Start at 0 and play to 9.9 sec.
    2. Loop back to 0.
3. At this point, the step 2 audio fades in, and the step 1 audio fades out over the range 9.9 to 10 sec: a clean crossfade.
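In code form (a trivial sketch; the variable names are mine):

(
var totalDur = 10.0, fadeDur = 0.1;  // seconds
var loopEnd = totalDur - fadeDur;    // 9.9: loop point, leaving room to fade out
"loop 0 to % s, crossfade from % to % s".format(loopEnd, loopEnd, totalDur).postln;
)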

    But if you're happy without a proper crossfade, then by all means, do what you feel is best.

    At this point, with apologies, I need to withdraw from the thread. I've already spent much more time on it than I expected, and the follow-up questions are becoming a bit like... not the best use of time. Like, I'm getting ready to shoot a YouTube tutorial on Pd external sync, and instead of working on those materials, I was explaining crossfading here. I think I need to strike a better balance.

    hjh

posted in technical issues
