Is it possible to export a Pd patch as an RTAS or VST plug-in?
-
Export patch as RTAS?
-
I don't believe so. For a while there was pdVST, which let you run an instance of Pd on a channel in a host and run patches through it, but it hasn't been updated since 2004/2005 (and after just testing it, it doesn't want to work in Ableton Live).
-
You can try using JACK to send a signal out to Pd and back in on a separate track, though you'll have to deal with some latency. And if you're using Pro Tools HD, I think you won't be able to take advantage of the proprietary TDM drivers.
If you're on OSX, JACK can be used as an insert plug-in so you can avoid the separate tracks, but you still get the latency.
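(If you'd rather script that routing than click it together in QjackCtl/JackPilot, here's a rough sketch using the JACK-Client Python module; the client and port names are placeholders I made up, so check `jack_lsp` to see what your DAW and Pd actually register as.)

```python
import jack

# Placeholder port names -- run `jack_lsp` to list the real ones on your system.
# Pd usually registers as "pure_data"; the DAW side depends on your setup.
DAW_OUT = "your_daw:out_1"     # send from the DAW track...
PD_IN   = "pure_data:input0"   # ...into Pd,
PD_OUT  = "pure_data:output0"  # ...then from Pd...
DAW_IN  = "your_daw:in_1"      # ...back into a return track in the DAW.

client = jack.Client("pd_router")
client.activate()

# connect(source, destination) asks the JACK server to wire two existing
# ports together; the audio keeps flowing in the server after this script exits.
client.connect(DAW_OUT, PD_IN)
client.connect(PD_OUT, DAW_IN)

client.deactivate()
client.close()
```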
-
@Maelstorm said:
If you're on OSX, JACK can be used as an insert plug-in so you can avoid the separate tracks, but you still get the latency.
which you can, depending on your host, eliminate by setting the track delay for the track with the plugin. So if the buffer is 512 samples (about 11.6 ms at 44.1 kHz), then set the delay to that and it should be spot on.
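(The conversion is just buffer size divided by sample rate; a quick sketch with example numbers rather than anything host-specific:)

```python
# Convert a buffer size in samples to the track-delay value (in ms) for the host.
def buffer_ms(buffer_samples, sample_rate):
    return 1000.0 * buffer_samples / sample_rate

print(buffer_ms(512, 44100))  # ~11.61 ms at 44.1 kHz
print(buffer_ms(512, 48000))  # ~10.67 ms at 48 kHz
```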
I've had the whole JACK graph latency thing explained to me numerous times by Stephane Letz and it still doesn't go in... Here's what he told me:
> > It's the Pd > JACK > Ableton latency. (Ableton has otoh 3 different
> > ways of manually setting latency compensation - I'm just not very
> > clear on where to start with regards to input from JACK)
>
> There is NO latency introduced in a Pd > JACK > Ableton kind of
> chain; the JACK server activates each client in turn (that is, Pd *then*
> Ableton in this case) in the *same* audio cycle.
>
> Remember: the JACK server is able to "sort" (in some way) the graph
> of all connected clients so that it activates each client's audio callback
> at the right place during the audio cycle. For example:
>
> 1) in a sequential kind of graph like IN ==> A ==> B ==> C ==> OUT,
> the JACK server will activate A (which typically consumes the new audio
> buffers available from the machine's audio IN driver), then activate B
> (which typically consumes the "output" just produced by A), then activate
> C, then produce the machine's OUT audio buffers.
>
> 2) in a graph with a parallel sub-graph like IN ==> A ==> B ==> C
> ==> OUT and IN ==> D ==> B ==> C ==> OUT (that is, both A and D are
> connected to the input and send to B), the JACK server is able to
> activate A and D at the same time (since they both depend only on IN),
> and a multi-core machine will typically run A and D at the same time
> on 2 different cores. Then, when A *and* D are finished, B can be
> activated... and so on.
>
> The input/output latency of a usual CoreAudio application
> is: driver input latency + driver input latency offset + 2 ×
> application buffer-size + driver output latency + driver output
> latency offset.
> this next part is the important bit, I think...
> For a graph of JACK clients: driver input latency + driver input
> latency offset + 2 × JACK server buffer-size + (one extra buffer-
> size) + driver output latency + driver output latency offset.
>
> (Note : there is an additional one extra buffer-size latency on OSX,
> since the JACK server is running in the so called "asynchronous" mode
> [there is also a "synchronous" mode without this one extra buffer-size
> available, but it is less reliable on OSX and we choose to use the
> "asynchronous" mode by default.])
>
> Stephane
boonier
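To put that formula into numbers, here's a rough sketch of my own (not from Stephane, and the driver latency figures below are made-up placeholders; something like `jack_lsp -l` or your driver's control panel should give the real ones):

```python
# Stephane's formula for a graph of JACK clients, everything in samples.
def jack_graph_latency(buffer_size,
                       driver_in, driver_in_offset,
                       driver_out, driver_out_offset,
                       asynchronous=True):
    latency = driver_in + driver_in_offset
    latency += 2 * buffer_size      # two JACK server buffer-sizes
    if asynchronous:
        latency += buffer_size      # the extra buffer in OSX "asynchronous" mode
    latency += driver_out + driver_out_offset
    return latency

# Made-up example: 512-sample buffers, driver latency of one buffer on each side.
samples = jack_graph_latency(512, driver_in=512, driver_in_offset=0,
                             driver_out=512, driver_out_offset=0)
print(samples, "samples =", round(1000.0 * samples / 44100, 2), "ms at 44.1 kHz")
```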
-
boonier, thanks for posting this. I kind of get it?
I was always under the impression that each JACK client had its own set of buffers to fill and read from, thus increasing the latency. But it sounds like Stephane is saying this isn't the case; they're sharing (how nice). It's more like how audio objects in Pd process audio in turn within the same block, so with JACK it's like each client is a Pd object. I think.