hi there, i just found out about this web thing by sebpiq on the mailing list:
this might be a neat feature to add to the forum, wouldn't it? i'm a big fan of ascii diagrams for patches, but pasting raw pd code and seeing it turned into graphic svg stuff would be really cool too.
questions: does anyone care about this besides me? is it doable at all? who has the privileges here to do some hacking in the punbb engine?
it's happening more and more often that people ask for generic help, like "can anyone help me? in my brain pd is just an ensemble of squares and lines."
i think it's time to put a sticky post in the technical issues section, or maybe in the root of the forum itself, with an updated list of resources for learning pd, gem and other pd-related things. i know this stuff is already scattered around the forum, but having it all in one place would be nice for beginners and save us some copy-and-paste work.
we could start by collecting down here all the stuff we have and then try to put a list together. just a thought.
this might be slightly off topic but I need help with some simple sound-related math. I've been trying to do something as trivial as plotting the function of one wave frequency-modulated by another.
At the moment i'm using this function:
 y = cos(Fc * x + M * cos(Fm * x))
Function plot on Wolfram Alpha
Where Fm is the frequency of the modulator, Fc the frequency of the carrier and M the modulation index. This way I can draw one sine wave modulating another and - surprisingly enough - it also works when modulating the frequency of other waveshapes, like a [x - floor(x)] sawtooth:
 y = (Fc * x + M * cos(Fm * x)) - floor(Fc * x + M * cos(Fm * x))
Function plot on Wolfram Alpha
BUT I can't manage to do the opposite! If I want a sine wave to be frequency-modulated by a sawtooth wave, it just doesn't work. I see the phase going crazy but no frequency modulation at all.
 y = cos(Fc * x + M * ((Fm * x) - floor(Fm * x)))
Function plot on Wolfram Alpha
Am I taking a completely wrong approach? What elementary mathematical principle am I missing here? Thanks!
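For reference, here is a minimal python sketch of the three formulas above, in case anyone wants to play with the numbers outside of Wolfram Alpha. Fc, Fm and M are named as in the plots; the sample values are arbitrary test settings, not anything special:

```python
import math

def saw(x):
    # sawtooth in [0, 1): x - floor(x)
    return x - math.floor(x)

def fm_sine_by_sine(x, Fc=5.0, Fm=1.0, M=3.0):
    # first formula: y = cos(Fc*x + M*cos(Fm*x))
    return math.cos(Fc * x + M * math.cos(Fm * x))

def fm_saw_by_sine(x, Fc=5.0, Fm=1.0, M=3.0):
    # second formula: sawtooth carrier, sine modulator
    # (the whole modulated phase is wrapped by the sawtooth)
    return saw(Fc * x + M * math.cos(Fm * x))

def fm_sine_by_saw(x, Fc=5.0, Fm=1.0, M=3.0):
    # third formula: sine carrier, sawtooth used as modulator,
    # exactly as written in the post
    return math.cos(Fc * x + M * saw(Fm * x))

xs = [i * 0.01 for i in range(1000)]
ys = [fm_sine_by_sine(x) for x in xs]
print(min(ys), max(ys))  # both within [-1, 1]
```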
if you look at the nicknames here in the forum and on the pd-list, you will notice some active italian pd users, but most of the time it turns out they're studying or researching abroad. there is also a website for the italian community, but its forum looks abandoned and the site itself lacks updated content.
so, in case you are an italian pd user, where are you from exactly and what do you know about the current italian pd scene?
all the best,
i'm happy to share my first attempt at a depth of field effect in gem. it's still very beta but it's working fine and i'm quite excited by the results. feedback is welcome as always.
note: i'll be releasing the slightly modified glsl code and the pd patch as soon as I get the permission from the author of the bokeh shader.
I recently came to the conclusion that I should learn something more about signal processing and started reading the first pages of both Andy Farnell's tutorials and Synth Secrets by Gordon Reid.
The basics of Amplitude Modulation seem clear to me. When multiplying an oscillator signal (the carrier) by another one (the modulator), I get a new waveform which includes three frequencies: the carrier, the sum (carrier + modulator) and the difference (carrier - modulator).
It works fine, but something goes wrong when I try the reverse test. I expected that if I sum three oscillators (carrier frequency, sum frequency, difference frequency) the resulting wave should sound like the one I get from the amplitude modulation, but it doesn't. It is the same (and the waveform corresponds as well) only if I mute the carrier frequency. Why is this happening?
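For reference, here is a quick numeric check of the product-to-sum identity I think is at play, cos(a)*cos(b) = 0.5*(cos(a-b) + cos(a+b)). This is just a plain python sketch; 440 and 110 Hz are arbitrary test frequencies, not from my patch:

```python
import math

fc, fm = 440.0, 110.0  # arbitrary carrier/modulator test frequencies

def product(t):
    # carrier multiplied by modulator
    return math.cos(2 * math.pi * fc * t) * math.cos(2 * math.pi * fm * t)

def sidebands(t):
    # sum and difference components only, each at half amplitude
    return 0.5 * (math.cos(2 * math.pi * (fc - fm) * t)
                  + math.cos(2 * math.pi * (fc + fm) * t))

# compare the two signals sample by sample over 0.1 s at 44.1 kHz:
# the plain product contains only the sum and difference frequencies
max_err = max(abs(product(i / 44100.0) - sidebands(i / 44100.0))
              for i in range(4410))
print(max_err)  # ~0 up to floating-point rounding
```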
As a total beginner I'm pretty sure I'm missing some simple logic here. Attached is a test patch.
Thanks in advance for your patience.
it's been a long time since i started wondering about getting some advanced visual effects out of pd. I know "advanced visuals" could mean a lot of different things, but let's say I am thinking of pixel stuff like depth of field, bloom, glow, blurring. I've tried pretty much everything, from basic pix effects to freeframe fx and gridflow convolutions, but no matter what I do, since these effects are cpu based the resulting patch is always dead slow.
My first question is: as far as i know pd was born as audio software, so does it make sense to keep pushing it into the domain of visuals?
Don't get me wrong, I love pd and I know the amazing stuff you can get out of gem and gridflow. Think of all the 3d manipulations, sound visualization, video mixing, opencv stuff, pmpd physics simulations, just to name a few. You can get wonderful visuals just by using geos and simple texturing. But sometimes I run into limitations, like the pixel effects I mentioned before, and I wonder if I should just leave pd to what it's good at and move to video-driven software like vvvv or a "classic" programming environment like Processing.
I know a lot of the stuff I've been talking about could be achieved at negligible cpu cost by leaving the calculations to the gpu. I think GLSL's potential is huge, and I got some basic blurring, glowing and blooming effects I found on the web to work, but it still feels a bit like a workaround to me (especially multipass rendering).
Here is the second question: could opengl and glsl scripting be the answer to my first question? and what do you guys think about having a place to host a (hopefully growing) collection of ready-to-use GLSL effects along with example patches? maybe with a standard framework of objects for multi-texture effects and general GLSL handling?
Ok, that's all. Any feedback will be extremely appreciated.
Here follows a simple GLSL blooming effect applied to gem particles (works on macosx 10.5, pd extended, gem 0.92.3)
I am working on a quite complex step sequencer that triggers both sounds and visual effects. Since I want my live sessions to be available for further editing, I usually record the audio with [writesf~].
Sequencing the audio works fine at first, but right when the video kicks in, things slow down, bringing along some little crackling noises. Here is the weird thing: even though the live session is distorted and slowed down, the audio file I record is perfect.
This confuses me a lot. How can pd record data to file in such perfect shape when the live audio is so corrupted? It doesn't seem to be related to cpu usage, so am I missing something?
It took me hours to get TouchOSC working with Pure Data on a wireless local network (no router). Now that it finally works, I find myself in trouble with latency. I can send data in real time with perfect sync from PD to TouchOSC, but it goes crazy with delay every time I try to adjust a slider or tap a button on the TouchOSC interface.
The weird thing is that it seems to work fine at first but then, as I keep playing with the interface, the lag becomes more and more annoying.
I don't know what to do to solve this. Any advice will be greatly appreciated!
TouchOSC is running on an iPhone 3GS with iOS 4.0, Pure Data on a MacBook Pro with MacOSX 10.5.8.
I was just wondering how come I can apply any sort of crazy effect in real time to a video source (either a file on the hard disk or a camera), but I can't even add some glow or blur to a 2D circle or anything like that.
This is an important limitation, isn't it?
I've heard GLSL support could open new directions, but even after doing some research I have no idea what GLSL is or how it works.
Is there a way to apply pix effects (freeframe included) to a shape instead of a pix_video, pix_film or pix_image?
I understand some people play with pix_snap2tex and gemframebuffer, but I need an example to figure it out properly.
Thank you in advance.