Now that it is possible to generate shader visuals from sound information in Pure Data (it was possible before with Gem, right?), I was wondering how it could work the other way around: generating sound, or at least sound information, with shaders and sending it to Pure Data. There are some very interesting examples of how to use the GPU for sound generation. This one is Shadertoy again:
https://stackoverflow.com/questions/34859701/how-do-shadertoys-audio-shaders-work
https://www.reddit.com/r/musicprogramming/comments/2cbd2s/shadertoy_has_added_glslsynthesized_audio/
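The basic idea, as explained in the links above, is that Shadertoy runs a sound shader once per audio sample and packs the results into a texture that is then read back and played as a stereo buffer. A minimal sound shader looks roughly like this (Shadertoy's newer API also passes a sample index, but the principle is the same):

```glsl
// Shadertoy sound shader: evaluated for every audio sample.
// 'time' is the sample position in seconds; the return value
// is the stereo sample pair in the range -1..1.
vec2 mainSound(float time)
{
    // 440 Hz sine with an exponential decay: a plucked A4.
    return vec2(sin(6.2831853 * 440.0 * time) * exp(-3.0 * time));
}
```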
This also seems interesting: https://www.fsynth.com/
Anyway, I read out one horizontal pixel line per frame and use the RGB information to generate MIDI notes.
I also use the mScale abstraction from @ingox for scaling the notes.
The player also generates MIDI notes from images; it only stays silent if the whole scanned line is very dark or very bright.
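For anyone who wants to experiment with the idea, here is a rough sketch of how such a scanline reduction could look on the GPU side. This is not the exact logic of my patch, and the uniform names and the pitch/velocity mapping in the comments are made up for the example; the point is just that a fragment shader can boil one line of the frame down to a single pixel, which the host application then reads back and turns into note data:

```glsl
// Hypothetical reduction pass: sample one horizontal line of the
// input frame, average its colour, and write the result to a 1x1
// render target. The host reads this pixel back and converts it
// to a MIDI note, e.g. note = 36 + int(red * 48), vel = int(luma * 127).
uniform sampler2D tex0;   // input frame
uniform float scanY;      // normalized y position of the scanned line
const int STEPS = 64;     // samples taken along the line

void main()
{
    vec3 sum = vec3(0.0);
    for (int i = 0; i < STEPS; i++) {
        float x = (float(i) + 0.5) / float(STEPS);
        sum += texture2D(tex0, vec2(x, scanY)).rgb;
    }
    vec3 avg = sum / float(STEPS);
    float luma = dot(avg, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(avg.r, luma, 0.0, 1.0);
}
```

The dark/bright silence threshold from above would then simply be a test on the read-back luma value before a note is sent.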
My first approach was to generate audio directly from the pixel information, like in the examples above, but the sound was always distorted (I am sure it could work, though...).
I am also interested in other methods to generate sound from visuals and visuals from sound.

GLSL_Video_Effects_V01_MusicPlayer.zip

Here is an example from the MusicPlayer: