I am working on a project that will use a video camera to observe the movement of fireflies at dusk, and use this data to trigger audio in real time.
I.e.: the position of a firefly's light on the X and Y axes (or within a particular cell of a grid imposed on the video) could trigger a specific note or sample, the brightness of the light could affect volume, and movement along the X and Y axes could cause a glissando.
I am feeling good about using the data once it's actually in Pd, but I could really benefit from some suggestions for good ways to get started reading the data from a live camera feed. I figure there's a good chance that something very similar to this has been done before, but I can't find it in the forum. Any ideas will be gratefully received, thank you!
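For the mapping side of this (position → note, brightness → volume), here is a rough sketch in Python of the kind of grid logic described above. The scale, grid size, and value ranges are my own illustrative assumptions, not Pd code — in a patch this would be a few [expr]/[moses]-style objects, but the arithmetic is the same:

```python
# Hypothetical mapping from a detected firefly blob to note parameters.
# All choices here (pentatonic scale, octave split, MIDI ranges) are
# illustrative assumptions, just to make the grid idea concrete.

# One note per grid column (MIDI note numbers: C, D, F, G, A).
SCALE = [60, 62, 65, 67, 69]

def blob_to_note(x, y, brightness, width=640, height=480):
    """Map a blob's position and brightness to (midi_note, velocity).

    x, y       : blob centroid in pixels
    brightness : normalized blob brightness, 0.0-1.0
    """
    # Which grid column the blob falls in selects the scale degree.
    col = min(int(x / width * len(SCALE)), len(SCALE) - 1)
    note = SCALE[col]
    # Higher on screen (smaller y) -> raise the note an octave.
    if y < height / 2:
        note += 12
    # Brightness drives MIDI velocity (volume).
    velocity = int(brightness * 127)
    return note, velocity

print(blob_to_note(320, 100, 0.5))  # -> (77, 63)
```

A glissando would then just be a matter of interpolating between successive frames' values instead of quantizing to the grid.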
Reading data from a video camera
@Dizzy-Dizzy Easiest if GEM is working with Pd on your system........ https://curiousart.org/digital_proj/pd_eBook.pdf .....see Chapter 5.
It can maybe be done with the Ofelia external for Pd.... which is worth learning how to use if you have the time..... but I think not the easiest way...
Otherwise http://reactivision.sourceforge.net/ .... see "finger tracking"..... that might get you position data, sent as OSC messages that can be received in Pd...
Last semester, I gave a class demo about video camera input in Pd.
For a single blob, pix_blob does reasonably well (though, since it computes a single weighted average, everything tends toward the center).
"... movement of fireflies" implies tracking multiple points.
Gem has pix_multiblob, but on my system, this was unacceptably slow. YMMV depending on hardware, image resolution etc., but in my case, it simply wasn't usable.
So instead, for the demo, I found a Processing sketch online for multi-blob tracking and added a function to send Open Sound Control (OSC) messages to Pd. Worked a treat.
Ofelia is likely to be more performant than Gem, but it's much harder to use.