Aeropsia drain defocus
apparently this is pd?
sounds awesome whatever it is.
This is very good ambient, the mix and effects are of very high quality.
if it's pd (without samples), I'd love to see the patch for learning!
I'd be scared if that isn't sample-based...
Lucider Improvised Funktronic Interface: Ubuntu Studio AMD64 10.04 -rt 2.6.33-4 // Phenom II 3.2x2, 2GB RAM, Asus mobo w/ hyper transport // Elo 15...
Hey that's mine!
I just found this thread, thanks for the kind words.
It's not 100% Pd, but the important stuff is (all the composition/synth algorithms and most of the subsequent processing and manipulation). I don't think I can even remember what all was involved, but there was a lot of multitracking and re-processing. No samples anyway.
Okay, now I am terrified... I was sure that it was simply granular manipulation of ORCHESTRAL samples.
I'm interested in your methods of control. At one extreme you'd be making decisions constantly; at the other, rinsing the track to disk while making a sandwich. And you mention multitracking, so perhaps your control comes in at the editorial stage? That is, your "direct aesthetic manipulation" as opposed to the decisions suggested to the computer through your brilliant patch.
What do you mean by an algorithmic wavetable? Like a wavetable data is permuted into a new wavetable, so on? And, for heaven's sake, since when does an accurate periodic waveform sound anything like a real instrument!?!?! What have you done?
Just to be an asshole, as is my wont: re-processing and multitracking is inherently "sample" based. I imagine this became necessary both for CPU performance and for gaining more control over the finished product.
Again, congratulations, as so many others I was very moved by this work.
As an aside, no offense intended, it reminded me of 4 - 6 AM spent listening to hundreds of birds calling furiously while I (not the birds, I hope) was under the influence of LSD. This is a sincere compliment if you can accept it.
There are definitely no orchestral samples, but I used some formant filter type things which might be giving you that impression. It's actually not anything very fancy, just approximately emulated "vocal" filters from the Roland VP-330 and some feedback delay network things (sort of higher order comb filters).
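For anyone wondering what a "sort of higher order comb filter" means in practice, here's a minimal single-tap feedback comb sketched in Python. This is not the actual patch; the delay length and feedback amount are made up, and a real feedback delay network would run several of these delay lines back into each other through a mixing matrix:

```python
import numpy as np

def feedback_comb(x, delay, feedback):
    """Single-tap feedback comb: y[n] = x[n] + feedback * y[n - delay].
    An FDN generalizes this by feeding multiple delay lines back
    into each other through a mixing matrix."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - delay] if n >= delay else 0.0)
    return y

# An impulse comes out as echoes decaying by the feedback amount each pass:
impulse = np.zeros(16)
impulse[0] = 1.0
out = feedback_comb(impulse, delay=4, feedback=0.5)
```

The repeating echoes are what give the comb its pitched, resonant character once the delay gets short enough.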
The composition was basically automatic (based on sequencing intervals rather than absolute pitches), with random variation added to a large number of parameters. I think it works well to record to multiple simultaneous files, which can then be arranged in a DAW, mixed and re-processed individually (through another Pd patch, or whatever). I'd rather take advantage of "studio as an instrument" rather than just "the patch is the composition". I can manually control how it progresses and how the different sounds fit together, bring in something else if there's a dull part... It's not terribly convenient but it at least compartmentalizes the different parts of the process. And I'm no purist either, if something just happens to need a reversed spring reverb crash...
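The "sequencing intervals rather than absolute pitches" idea can be sketched like this in Python. To be clear, the interval set, the MIDI numbers, and the clamp range below are all invented stand-ins, not values from the actual patch:

```python
import random

random.seed(1)  # arbitrary seed, just for reproducibility

def interval_walk(start=60, steps=16, intervals=(-7, -4, -3, 3, 4, 7)):
    """Pick each pitch by stacking a random interval on the previous
    note instead of drawing absolute pitches from a scale."""
    pitch = start
    out = [pitch]
    for _ in range(steps - 1):
        pitch += random.choice(intervals)
        pitch = min(max(pitch, 36), 96)  # keep it in a playable range
        out.append(pitch)
    return out
```

Because each note is relative to the last, the line wanders coherently instead of jumping around at random, which suits this kind of automatic composition.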
As far as the "algorithmic wavetable" thing goes, I realized that harmonic additive and wavetable synthesis are a pain since they require large amounts of data entry. If you can automate it by, for example, generating a 1/f spectrum (i.e. a sawtooth wave), eliminating a random selection of harmonics, multiplying the result by some sort of scaled "filter" function, etc., while automatically varying the process in different ways, you can get an endless stream of "timbre frames" which can then be used for straight wavetable synthesis, waveshaping, phase distortion or whatever. I figure that with some enhancement it can cover most of the "classic" territory, i.e. PPG Wave, Synclavier, etc., and a good deal more, with cheap polyphony if the wavetables are all global. I don't think that's responsible for anything sounding "orchestral", anyway. It's very synthetic.
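The spectrum-mangling recipe above (1/f spectrum, drop random harmonics, scale by a filter curve) can be sketched in a few lines of NumPy. The keep probability and the exponential curve shape are invented parameters, just to show the pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
N_HARMONICS = 128

def timbre_frame(keep_prob=0.6, tilt=1.0):
    """One 'timbre frame' of harmonic amplitudes: start from a 1/f
    (sawtooth-like) spectrum, randomly eliminate harmonics, then
    scale by a smooth 'filter' function."""
    k = np.arange(1, N_HARMONICS + 1)
    spectrum = 1.0 / k                          # 1/f spectrum (sawtooth-like)
    mask = rng.random(N_HARMONICS) < keep_prob  # eliminate a random selection
    curve = np.exp(-tilt * k / N_HARMONICS)     # scaled "filter" function
    return spectrum * mask * curve
```

Re-running this with slowly varying parameters gives the endless stream of frames; each one is just a list of 128 harmonic amplitudes ready to be rendered into a wavetable.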
@PonZo said:
Just to be an asshole, as is my wont: re-processing and multitracking is inherently "sample" based.
...yeah, so is anything using a lookup table or delay, if you want to go down that road...
You make this sound simple but I have no idea where you learned how to do all of the stuff you're talking about. I guess I haven't learned very much about wavetable synthesis... or any of that other stuff. Filters, waveshapers, phase vocoders, and just about everything that involves the frequency domain confuses me, mostly because I don't understand the giant equations that are used to explain them. Maybe if I took a couple more years of calculus? Or maybe I'll just try reading dspguide.com again.
I do an excessive amount of reading: papers, dissertations, patents, schematics, etc...
Heheh, it was rude of me to even suggest that it was sample based when you made the samples yourself. Interesting idea to split and record several parts simultaneously - an excellent approach to gain some flexibility without having to re-record and hope for the best.
I must admit though... I have never constructed wavetables in Pd. What are the basic building blocks of a wavetable that allow one to "seek" through gradually changing waveforms? It seems that tabosc4~ would be cumbersome since there must be two that are phase-aligned.
As far as splitting it into different files, I just figured that much of the better early electronic/computer music was multitracked and assembled after the recording process, either in a way that emphasized composition by a manual arrangement of different parts, or to make each individual component sound as good as possible on its own. Computational requirements aside, I'd rather be able to focus on one track at a time. Dealing with a single huge patch is too unwieldy.
Anyway the whole wavetable thing is 8 arrays (length 128 for 128 harmonics), 4 for spectrum and spectrum modifier things (filter curves and whatnot) and 4 for binary data used to select different regions to edit. There's a big mess of stuff to iterate through and modify the arrays according to counters or random numbers or whatever, it's usually easier to just start from scratch each time than figuring out what something is supposed to do. The final array is power-normalized and the phase information removed, and alternately written into two wavetable arrays using the sinesum message (this is too CPU intensive, is there a better way to do this?). It's read by two phasor~/tabread4~ things with a global signal to linearly crossfade them together.
But actually any wavetable-based thing can be used here, for waveshaping or formant stretching or whatever. It's cheap because the playback part is just reading from arrays, so you can get about as much polyphony/detuning as you want, the restriction being that the waveforms are all the same globally.
The advantage of the straight wavetable thing is that band limited multisamples can be generated simultaneously and selected according to playback pitch, so it can completely eliminate aliasing. Plus the lower octaves can synthesize the "image" frequencies that you'd find in vintage hardware like the PPG Wave or Prophet VS. This is, to me, extremely desirable territory, but somewhat difficult to simulate with standard waveform playback without either aliasing or considerable oversampling.
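The band-limited multisample idea amounts to a wavetable "mipmap": one table per octave, each with every harmonic above Nyquist zeroed out for that octave's highest fundamental. A sketch, where the sample rate and lowest pitch are assumptions of mine:

```python
import numpy as np

SR = 44100.0      # assumed sample rate
BASE_FREQ = 27.5  # assumed lowest fundamental (A0)

def bandlimited_spectra(amps, octaves=8):
    """One spectrum per octave: zero every harmonic that would land
    above Nyquist at that octave's highest fundamental. Render each
    spectrum to its own table and select by playback pitch."""
    out = []
    for o in range(octaves):
        top = BASE_FREQ * 2 ** (o + 1)     # highest fundamental in this octave
        max_h = max(1, int(SR / 2 / top))  # last harmonic below Nyquist
        a = np.array(amps, dtype=float)
        a[max_h:] = 0.0
        out.append(a)
    return out
```

Since the top octave's table keeps only a handful of harmonics, nothing can fold back over Nyquist at playback time, which is the whole anti-aliasing trick.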
It still needs a good deal of work, and I don't know if it's possible to package it in a reasonable way that doesn't sacrifice flexibility. I can't think of a workable user interface. Anyway I can't say this is necessarily the best way to do things, but it has a nice generality. The ability to automatically generate an endless stream of timbres is useful.