I guess this is a silly question, but is there any other way to add multiple control sources (for example 5 sliders) together than adding them with multiple [+] objects?
I mean something more CPU-efficient? It should accept floats, though.
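(For reference, the cascade in question looks something like the sketch below; a chain of control-rate [+] objects is already about as cheap as it gets. The one catch is that [+]'s right inlet is cold, so the sum only updates when the left-most source changes - the usual fix is a [t b f] on each of the other sources so that any change re-fires the chain.)
[hsl]        [hsl]        [hsl]
|            |            |
[+  ]--------'            |        <- slider 2 goes into the right (cold) inlet
|                         |
[+  ]---------------------'        <- slider 3 goes into the right (cold) inlet
|
(running sum of the three sources)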
-
Merging multiple control signals
-
OK, that question was crap. I ran into problems building a 10-voice synth. Imagine you have 10 oscillators (one per voice) and an LFO (which also consists of 10 oscillators, one per voice, with a phase reset triggered by each voice's note-on events). I used [pack~] to pack the LFO voices into a 64-float list each DSP cycle. After that I used [unroute] to bundle them into one message stream of 10 addressed 64-float lists per DSP cycle. I then use [route] to unpack them again and route each LFO voice to its corresponding oscillator voice to modulate something (amplitude, for example).
Now imagine I have a second LFO, also containing 10 voices with different phases, that I want to route to the same 10-voice oscillator bank. How can I add its 10 packed (and addressed) lists of 64 floats per cycle to those of the first LFO?
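(One hedged option, assuming both LFOs are still plain signals before the [pack~] stage: sum them per voice with [+~] and pack the summed result, instead of trying to add the addressed message lists.)
(LFO 1, voice 1)   (LFO 2, voice 1)
|                  |
[+~           ]----'        <- sum the two LFO signals for this voice in the signal domain
|
(then [pack~] / [unroute] as before, or use the sum directly)
-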
Hang on... you're trying to send 64-float lists as control data every block, to run at audio rate? That's very inefficient, and you'll be better off just creating an audio-rate LFO:
[phasor~]
|
[-~ 0.5]
|
[abs~]
|
[*~ 2]

makes a simple triangle wave LFO.
Then simply retrigger the phase of each voice's LFO when a note-on velocity is received by that voice.
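A minimal sketch of that retrigger, assuming each voice has a receive carrying its note-on velocity (the name is just an example):
[r voice1-gate]     <- hypothetical per-voice note-on receive
|
[t b]               <- turn any note-on into a bang
|
[0(                 <- message: phase 0
    \
[phasor~ 2]         <- into the RIGHT inlet, which resets the phase; the argument / left inlet sets the LFO rate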
I assume all the LFOs have the same speed, yeah? In that case, you won't have to worry about routing, as you can just send a global LFO speed variable.
Adding extra LFOs or modulations is easy then, too.
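A sketch of the shared-speed idea (the send name is just an example):

in the interface:
[hsl]               <- or a number box; the global LFO speed control, in Hz
|
[s lfo-speed]

in each voice:
[r lfo-speed]
|
[phasor~]           <- speed set at control rate, but the LFO itself runs at audio rate
|
[-~ 0.5]
|
[abs~]
|
[*~ 2]              <- same triangle shaping as above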
I think you might be misunderstanding how 'control signals' are generally used when building synths. Basically, you have the synth engine, which runs at audio rate, and then you have the interface - virtual, physical or a combination of both - which is clocked at control rate. Traditionally that was about 50 Hz (20 ms). So if I change the speed of my LFO by turning a knob or sending automation from a DAW, that data only goes through on each pulse of the control clock, i.e. every 20 ms or so.
However, the actual LFO runs as part of the audio engine and is updated every sample, to make a smooth wave. I'm assuming that modern synths have more headroom for control-signal updates than the older ones, so I send my control signals at 200 Hz (5 ms), which is pretty much faster than you'd ever be able to hear (and control signals aren't meant to do audio-rate modulations anyway).
Still, that only means one update every 4 blocks of samples or so. Anything that needs to be updated more frequently than that, I would just build with audio-rate objects.
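A minimal sketch of that split, with a placeholder parameter name: the interface sends a plain float roughly every 5 ms, and the audio engine smooths each step over one control period with [line~]:
[r cutoff]          <- hypothetical control parameter, updated at ~200 Hz by the interface
|
[pack f 5]          <- pair each new value with a 5 ms ramp time
|
[line~]             <- smooths the control-rate steps into a continuous signal
|
(into the audio engine, e.g. a filter's frequency input)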
-
Yeah, the reason I use this combination of [pack~] / [unroute] to convert audio into the message domain is simply to avoid "wire harps": [unroute] lets me make one bus from the ten sources (LFO voices), which I can then easily pass around with [send] objects, and so on. I didn't think about the disadvantages of this technique before. But I could use multiple sends instead, one for each modulation slot and modulation target. The corresponding [receive] objects (for example [r osc1-pm-slot1], [r osc1-pm-slot2]) get [unroute]d and [unpack~]ed inside the target patch, where they can easily be summed in the signal domain. Hope this works the way I want it to...
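(For what it's worth, a hedged vanilla alternative that skips the message-domain round trip entirely: [throw~] / [catch~] pairs act as a named summing bus in the signal domain, so any number of modulation sources can throw onto the same target and arrive already summed. The bus name below is just an example.)
[*~ 0.3]               [*~ 0.3]             <- two pre-scaled modulation sources
|                      |
[throw~ osc1-pm]       [throw~ osc1-pm]     <- both throw onto the same named bus

inside the target patch:
[catch~ osc1-pm]       <- everything thrown to this name arrives summed
|
(use as the modulation input of osc 1)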