-
stevegordon
posted in technical issues
Hi everyone,
I’m trying to build patches that recreate real-world sounds from recordings — things like a single pipe-organ note in a specific room, or a stone hitting water. I’m not trying to sample or play the audio. I want to synthesize it as closely as I can.
I’ve read a lot of Pd docs, but I’m still not sure how to approach sounds that have many layers. Do I break things down into partials? Use convolution? Or look into physical modelling? I work with data tools in my day job (mostly observability stuff like Sifflet), so I’m used to structured analysis — but sound still feels tricky to break apart in a useful way.
If anyone has advice, examples, or even simple guidance on how to analyse the source audio before building the patch, I’d really appreciate it.
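For a steady tone like the organ note, I imagine the analysis step looking something like this rough, untested Python sketch (the file name is just a placeholder): it picks the strongest FFT peaks so they could be rebuilt with a bank of [osc~] objects. Does that seem like a sensible starting point?

```python
# Rough sketch (untested): list the strongest partials of a steady tone
# so they can be recreated with additive synthesis ([osc~] + gains) in Pd.
# Assumes a mono WAV called "organ_note.wav" -- adjust to taste.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

rate, samples = wavfile.read("organ_note.wav")
samples = samples.astype(np.float64)
if samples.ndim > 1:                      # mix stereo down to mono
    samples = samples.mean(axis=1)

window = np.hanning(len(samples))         # reduce spectral leakage
spectrum = np.abs(np.fft.rfft(samples * window))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

# Keep spectral peaks that are at least 1% of the strongest component,
# then report the dozen loudest as frequency / relative-amplitude pairs.
peaks, _ = find_peaks(spectrum, height=spectrum.max() * 0.01)
strongest = sorted(peaks, key=lambda i: spectrum[i], reverse=True)[:12]

for i in sorted(strongest):
    print(f"{freqs[i]:8.1f} Hz   amplitude {spectrum[i] / spectrum.max():.3f}")
```

I realise a single FFT like this only makes sense for a sustained note; something transient like the stone hitting water would need a time-varying view (an STFT or similar), which is part of what I'm unsure about.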
Thanks.
-
stevegordon
posted in Off topic
We’re not quite there yet. Current AI can analyze and approximate the timbre or spectrum of a sound, and even generate similar tones with models like DDSP or other neural synthesis, but building a Pure Data patch that reproduces it exactly from scratch is another level. It would need a deep understanding of the physics, acoustics, and synthesis structure behind the sound. We’re getting closer, though: tools like Riffusion, Magenta, and even Sifflet-style observability for audio pipelines show that AI is starting to “understand” signals rather than just imitate them.
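To make the "approximate the spectrum" part concrete, the objective in DDSP-style work is usually something like the untested sketch below: a multi-scale spectral distance between a target recording and a synthesized candidate (the function and array names here are placeholders, not any particular library's API).

```python
# Minimal sketch (untested) of a DDSP-style multi-scale spectral distance:
# compare STFT magnitudes of a target recording and a synthesized candidate
# at several window sizes. A smaller value means a closer timbre/spectrum.
import numpy as np
from scipy.signal import stft

def spectral_distance(target, candidate, rate, fft_sizes=(2048, 1024, 512)):
    total = 0.0
    for n_fft in fft_sizes:
        _, _, T = stft(target, fs=rate, nperseg=n_fft)
        _, _, C = stft(candidate, fs=rate, nperseg=n_fft)
        frames = min(T.shape[1], C.shape[1])        # align frame counts
        T, C = np.abs(T[:, :frames]), np.abs(C[:, :frames])
        total += np.mean(np.abs(T - C))             # L1 on linear magnitudes
        total += np.mean(np.abs(np.log(T + 1e-6) - np.log(C + 1e-6)))  # and on log magnitudes
    return total
```

A model (or a person tweaking a Pd patch) that drives that number down is "matching the spectrum"; what it still doesn't give you is the patch structure itself, which is the hard part I was pointing at.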