-
acreil
just finished this
I was initially trying to emulate the obscure "CSM" mode of some Yamaha FM ICs.
CSM was meant to be used for speech synthesis, but in practice it was almost never used. I only have a very rough idea of how it's supposed to work, though. Things kind of just went all stupid from there...
-
acreil
Excerpt:
complete composition: http://acreil.bandcamp.com/album/hg-iterated-glass-entering-3e2
-
acreil
http://acreil.bandcamp.com/album/flutter-straight-on-the-nine-miles
Excerpt:
This one is all harmonics of 30 Hz.
-
acreil
This has already been posted in the output~ section, but I think it should go here too...
http://acreil.wordpress.com/2013/04/20/algorithmic-composition-of-acid-house-with-lo-fi-timbres/
-
acreil
I uploaded some abstractions that I used in this piece:
The first set (dphasor1, dphasor1~, dphasor2, dphasor2~) reduce aliasing by constraining phasor~ periods to integer numbers of samples, then correct the resulting pitch error either by slowly dithering between the two nearest frequencies or by applying noise modulation. An ordinary phasor~, by comparison, essentially does periodic, audio-rate modulation between the two nearest frequencies, generating sidebands that are audible as aliasing. The advantage of my approach is that all aliasing falls back onto the harmonic series (in the dphasor1~ case) or is spread out into broadband noise (in the dphasor2~ case). With oversampling, non-band-limited synthesis algorithms can be implemented with very high quality. And at low sample rates the artifacts generally aren't too horrible.
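For anyone curious, here's a rough numpy sketch of the noise-modulated variant as I've described it (just the principle, not the actual abstraction; all names are made up):

    import numpy as np

    def dphasor_noise(freq, sr, ncycles, seed=0):
        # The ideal period in samples is generally not an integer.
        period = sr / freq
        base = int(period)
        frac = period - base  # probability of using the longer period
        rng = np.random.default_rng(seed)
        cycles = []
        for _ in range(ncycles):
            # Each cycle is an exact 0..1 ramp over an integer number of
            # samples; randomizing the rounding keeps the long-term average
            # frequency correct and turns the pitch error into noise.
            n = base + (1 if rng.random() < frac else 0)
            cycles.append(np.arange(n) / n)
        return np.concatenate(cycles)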
The second set (pdist~, pdistx2~, pdistx4~) implement a phase distortion algorithm that smoothly interpolates between a sinusoid and an arbitrary waveform, or between multiple arbitrary waveforms.
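In the same spirit, a sketch of the phase distortion idea (again a guess at the structure, not the actual abstractions): warp the phase with a table, then read a cosine. The warp amount interpolates between the identity map (a pure sinusoid) and an arbitrary phase map; interpolating between two maps instead gives the multi-waveform version.

    import numpy as np

    def pdist(phase, phase_map, amount):
        # phase: values in [0, 1); phase_map: a monotonic table mapping
        # [0, 1] onto [0, 1].  amount = 0 gives a pure sinusoid,
        # amount = 1 gives the fully phase-distorted waveform.
        xs = np.linspace(0.0, 1.0, len(phase_map))
        warped = (1.0 - amount) * phase + amount * np.interp(phase, xs, phase_map)
        return np.cos(2.0 * np.pi * warped)

    sr = 48000
    ph = (np.arange(sr) * 110.0 / sr) % 1.0   # one second at 110 Hz
    pmap = np.linspace(0.0, 1.0, 512) ** 3.0  # dwells near phase 0, brighter tone
    y = pdist(ph, pmap, 0.7)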
Also included are a number of other "helper" abstractions.
The (carelessly thrown together) example patches are _dphasor_ex_1.pd, etc.
As far as I know, these are novel algorithms, though I wouldn't be surprised if they've already been published or patented; it was quicker for me to implement them than to dig through a bunch of literature.
I'm not making any promises about these being well-documented or useful. At this point they've only been made for my own personal use, so they mostly just conform to my own arbitrary and nonsensical conventions. I plan to eventually write better descriptions that discuss theory, etc. For now I just want to make them available.
-
acreil
I'm trying to think of a way to use Pd in a non-realtime, batch-process sort of fashion.
The idea is that I could pass file names and parameters via command line (or whatever), and Pd would process the data accordingly and write the results to a file.
I realize Pd probably isn't the best tool for the job, but it might make some things simpler. I'm trying to do some stuff for work that requires both batch processing and efficient real-time processing, and it would be nice if both could be done with the same software. The alternative is to use Pd for the real-time stuff and Octave for the batch stuff; Octave can handle some real-time work, but it's pretty slow.
Any suggestions?
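For the record, something along these lines is what I'm picturing (untested; -nogui, -batch and -send exist in recent Pd vanilla, check pd --help, and the receive name "params" is just made up):

    pd -nogui -batch -send "; params input.wav output.wav 0.5" render.pd

where render.pd would have a [receive params] that triggers the processing, writes the result with [writesf~] or soundfiler, and finally sends "; pd quit" to exit.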
-
acreil
I'm trying to do this "sound mass" type thing involving a large number of independent events triggered at random intervals. The problem is that for very high event density, it starts to have an audibly crappy sound (like bitcrushed noise) due to messages being sent at intervals of 64 samples.
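To illustrate what I mean, here's a quick numpy mock-up of the artifact (not Pd itself, just the same quantization):

    import numpy as np

    sr, dur = 44100, 1.0
    rng = np.random.default_rng(0)
    # ~2000 events per second at random (exponential) intervals
    times = np.cumsum(rng.exponential(sr / 2000.0, size=4000))
    times = times[times < sr * dur].astype(int)

    exact = np.zeros(int(sr * dur))
    blocked = np.zeros_like(exact)
    np.add.at(exact, times, 1.0)                  # sample-accurate triggers
    np.add.at(blocked, (times // 64) * 64, 1.0)   # snapped to 64-sample blocks
    # 'blocked' piles energy onto multiples of sr/64 (~689 Hz), which is
    # the gritty, bitcrushed quality I'm hearing.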
I'd thought messages were processed between audio blocks, so I tried changing the block size. Setting it larger than 64 samples does increase the spacing between messages, but smaller block sizes don't seem to make a difference; the intervals never get smaller than 64 samples. Is this just the way Pd schedules things?
Is there any way to send messages on a per sample basis (aside from maybe 64x oversampling)? I don't care about efficiency since I'm rendering this to a file.
I'm using the [delay] object to send "set" messages to biquads, if that makes any difference. Would something like [t3_delay] be a better idea?
-
acreil
Miller wrote [vcf~] with two outlets for the real and imaginary components because he seems to like complex numbers. He later realized that these are equivalent to bandpass and lowpass outputs, but only mentioned it informally on the Pd list; I don't think it's officially documented anywhere.
And any resonance will attenuate low frequencies. This is true of any standard lowpass filter that isn't specifically designed for constant passband gain. I think the minimum resonance should be Q = 0.707 (the maximally flat, Butterworth value), which shouldn't attenuate low frequencies. But I haven't tried this for myself.
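To make that concrete, here's the magnitude of the textbook unity-DC-gain 2-pole lowpass prototype (numpy; not necessarily [vcf~]'s exact structure):

    import numpy as np

    def lp2_mag(u, q):
        # |H| of a unity-DC-gain 2-pole lowpass; u = freq / cutoff
        return 1.0 / np.sqrt((1.0 - u**2) ** 2 + (u / q) ** 2)

    u = np.linspace(0.0, 2.0, 20001)
    for q in (0.707, 1.0, 2.0, 5.0):
        peak = lp2_mag(u, q).max()
        print(f"q={q}: peak gain {peak:.2f}; if the peak is scaled to 1, "
              f"low frequencies sit at {1.0 / peak:.2f}")

At q = 0.707 the response is maximally flat, so nothing gets attenuated; above that, normalizing the resonant peak to unity drops the low end by roughly 1/q.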
-
acreil
If you use "-bytes 4", writesf~ records 32 bit floating point. The valid range by convention is still -1 to 1, but you can normalize it later. That way you don't have to worry about levels when recording.
-
acreil
If you just want a 2 pole lowpass, you can simply use the second outlet of [vcf~].
You can make a 4 pole resonant lowpass (like a Moog filter) using four [lop~] filters in series inside a feedback loop, but this has to be in a subpatch with a block size of 1, and some other things need to be added to make the resonance behave correctly, add nonlinearities, etc. It's expensive enough that it's usually better to use the [moog~] or [muug~] externals.
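For reference, the structure I mean, as a naive per-sample sketch in numpy (the real externals are refined well beyond this; res runs roughly 0 to 4, with self-oscillation near the top):

    import numpy as np

    def ladder_lp(x, cutoff, res, sr):
        g = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)  # one-pole coefficient
        y1 = y2 = y3 = y4 = 0.0
        out = np.empty_like(x)
        for i, s in enumerate(x):
            # feedback from the 4th stage; tanh is a crude stand-in for
            # the nonlinearity that keeps the resonance under control
            u = np.tanh(s - res * y4)
            y1 += g * (u - y1)
            y2 += g * (y1 - y2)
            y3 += g * (y2 - y3)
            y4 += g * (y3 - y4)
            out[i] = y4
        return out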
-
acreil
I think the most efficient way is to use ifft to write to a table, then do some other stuff to automatically copy the first few points to the end of the table (guard points for interpolation, as sinesum does). I think this could go in a subpatch that's triggered by a bang and turns itself off after the table is written.
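The numpy equivalent of the idea looks roughly like this (the Pd version would use the ifft objects plus some table copying; the scaling and phase conventions here are my guesses):

    import numpy as np

    npts, nharm = 1024, 16
    mags = 1.0 / np.arange(1, nharm + 1)   # e.g. sawtooth-ish partials
    phases = np.zeros(nharm)               # zero phase = cosine partials

    spec = np.zeros(npts // 2 + 1, dtype=complex)
    spec[1:nharm + 1] = mags * np.exp(1j * phases)  # bin 0 (DC) left at 0
    table = np.fft.irfft(spec) * (npts / 2.0)       # partial k has amplitude mags[k-1]

    # copy 3 guard points for Pd's 4-point interpolation (table size 2^n + 3)
    guarded = np.concatenate([table, table[:3]])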
You could also do a polar to rectangular conversion to generate real and imaginary harmonic coefficients, use both sinesum and cosinesum, then add the resulting tables. But that seems rather clumsy. And I think the first coefficient for cosinesum is the DC component, so you'd have to prepend a 0.
I made a real time additive synthesis engine at one point that used sinesum, and later changed it to ifft. I think ifft is much faster, though it may be less convenient to use a signal input rather than a message.