-
Zygomorph
Hi all! I've been away for a while. Because I'm currently awaiting the delivery of an Organelle, I wanted to get my brain back into the Pd world.
Since about 2015 or so, my main performance instrument has been a Raspberry Pi 2b running this kernel: https://blog.emlid.com/raspberry-pi-real-time-kernel/
I'm interested in writing about my experiences here because in researching and preparing to use the Organelle, I ran into some newer Pi OS options that claim to have good realtime perf, etc. -- specifically Patchbox OS.
However, upon running it on my 2b, I immediately noticed that it was overall quite sluggish (just logging in and using the terminal via SSH was painful) and besides which, the audio produced was unusably distorted with my USB audio device (an ancient MAudio MobilePre) no matter what I set Jack/ALSA to use.
Contrast that with Wheezy running the PREEMPT kernel. I run quite heavy patches on multiple cores, responding to accelerometer and gyroscope data coming in via a USB WiFi AP, using a buffer size of 128 samples, 3 frames, at 44.1kHz, seemingly effortlessly with no clicks or dropouts. Granular synths, FM synths, vocal processing with long delay lines, a few Freeverb instances.
Has anybody else used or compiled the kernel for more recent OS versions/Pi models and had similar luck? I'm interested in being able to compile kernels myself so I can see if the benefits carry over to newer Pi models and/or the Organelle (if necessary).
Maybe I'm just hamstrung from the start by using such an antiquated USB device with an antiquated Pi model, since the latencies are far better in any case when using a Hat-type interface, and I shouldn't worry about it so much.
Thoughts?
-
Zygomorph
@weightless Hi! I'm happy to explain my patch further, though it's probably way simpler than you might think.
I looked at your patch, and I have to admit that I don't really have the time to figure out where everything is in it. But maybe, with your help, I can get far enough along to implement my method as an experiment. In particular, I see [tabsend~ $2-$3.phase1] at the bottom of PMop.pd, but I can't find the corresponding [tabreceive~] for these signals.
I do see [tabreceive~ $2-$3.phase1] but that doesn't correspond to the naming schema above. So I guess if you could show me where the feedback is actually taking place, that'd be very helpful.
It may be that my implementation is too simplified a use-case for what you're trying to do, but in looking at the algorithm described in the post we both referenced, as well as my own intuitive understanding of what the DX7 is doing with its fed-back operators, I figured that it's just a cosine modulating the phase of a cosine of the same frequency. So instead of trying to feed the signal of a cosine back into itself, with all its attendant blocking issues, I figured it was easy enough to just use another [cos~] driven by the same [phasor~]. That's what you should see in the first screenshot in my original post.
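To make that concrete, here's a rough Python sketch of the idea (Python just for illustration; the 220 Hz frequency and 0.5 index are arbitrary example values, not anything from the patch). The "clone" version computes cos(phase + index·cos(phase)) from a shared phase, while true one-sample feedback would feed each output sample into the next phase calculation:

```python
import math

SR = 44100     # sample rate (assumed)
FREQ = 220.0   # operator frequency, arbitrary example
BETA = 0.5     # feedback/modulation index, hypothetical value

def clone_fm(n):
    """'Clone' trick: a second cosine driven by the same phase
    (same [phasor~]) modulates the original -- no feedback loop."""
    phase = 2 * math.pi * FREQ * n / SR
    return math.cos(phase + BETA * math.cos(phase))

def true_feedback_fm(num_samples):
    """One-sample feedback: each output sample modulates the next,
    which is what a [cos~] fed back into itself would do at block size 1."""
    out, prev = [], 0.0
    for n in range(num_samples):
        phase = 2 * math.pi * FREQ * n / SR
        prev = math.cos(phase + BETA * prev)
        out.append(prev)
    return out

clone = [clone_fm(n) for n in range(256)]
feedback = true_feedback_fm(256)

# The two signals are similar but not identical, which matches the
# observation that the clone trick only approximates the real algorithm.
max_diff = max(abs(a - b) for a, b in zip(clone, feedback))
```

So the clone is a zero-delay approximation of the feedback operator, which is presumably why it sounds close but doesn't implement the original algorithm exactly.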
The second [inlet~], marked "phase", is a mix of the other operator outputs sent through by the matrix. As shown in the second screenshot, it doesn't contain any of the "self" operator signal, i.e. the "naive" feedback approach that sounds awful.
HOWEVER. I do see now how this does not actually implement the original algorithm, so you can disregard all of this. I'll keep working on it and I'll let you know if I find anything different.
-
Zygomorph
Just an observation from patching for about 10 years now: if your system relies on common $0 values, the convenience of abstractions needs to be weighed against the inconvenience of having to pass $0 down as a parameter.
It might be useful, especially if you need to incorporate $0 into abstractions that already have a lot of established parameters (so that adding $0 to them would make things icky), to create a central source for $0 in your patch, to which you could send a [bang( and receive $0 back.
-
Zygomorph
After reading the interesting and insightful information on this very old thread (https://forum.pdpatchrepo.info/topic/6185/feedback-fm-algorithm) I realized that perhaps I could achieve the desired effect through a little sleight-of-hand.
Just use a "clone" of your operator! In theory, this should work for an arbitrary wavetable.
Perhaps somebody could tell me if I'm missing something important, but aurally, the difference between the naive approach and this one is night and day, and it sounds sufficiently DX7-y to me.
The other trick, if you're trying to design a matrix-style synth with abstractions, is to make sure that you keep the "original" operator output out of its own input, and that the modulation index goes to your "clone". I've accomplished that with dollar variables, as above (e.g. operator 1 modulating operator 1 is "[r op-$1-mod-$1]"), and in my matrix abstraction:
Basic flow control here: $1 is the modulator operator id and $2 is the carrier operator id. So if they're the same (feedback!), don't let the signal through.
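The gate logic is trivial; here's a minimal Python sketch of it (the function name and arguments are just mine for illustration, not anything from the abstraction):

```python
def route(modulator_id, carrier_id, signal):
    """Matrix routing gate: pass the modulation signal through unless
    the modulator and carrier are the same operator -- the feedback
    case, which is handled by the 'clone' operator instead."""
    if modulator_id == carrier_id:
        return 0.0  # block self-modulation in the matrix
    return signal

# route(1, 2, 0.7) passes the signal through; route(1, 1, 0.7) blocks it.
```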
Anyway, I hope this helps anybody. I'll get the full synth up when the interfaces are a bit more usable, I'm pleased with how simple it's turning out to be.
-
Zygomorph
THE GOAL:
I want my Raspberry Pi 2 to automatically start up the Jack server with realtime scheduling, then start Pure Data with realtime scheduling and load a patch, all without any user intervention, from a login shell.
As a performance artist working primarily with psychodrama (the technology is definitely NOT the important part here), fiddling around at a terminal right before or during a performance is kind of... psychically inconvenient. I need a box that I can plug in, give the audio output to the sound guy, and be ready to go.
PREREQUISITES:
I use Raspbian with a Linux kernel compiled with realtime goodness. I have hand-compiled Jack2 and Pure Data with realtime support in order to take advantage of this. Running a process with realtime priority requires the proper PAM directives set in /etc/security/limits.conf and related places, but that is beyond the scope of this little write-up.
Also somewhat relevant: I use an M-Audio MobilePre USB soundcard (sounds pretty awful by today's standards, but it's an extremely USEFUL box and sounds good enough for the work I do). For full-duplex sound, this requires the RasPi's USB to be set to single speed. In this configuration, I can get just under 2.9ms latency with good CPU overhead for Pure Data to run a few of my 64-voice wavetable and delay line granulators. Yeah!
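For anyone wondering where that latency figure comes from, here's the arithmetic (Python used just as a calculator, with the period/buffer settings from my jackd command below): one 128-frame period at 44.1 kHz is about 2.9 ms, and the full three-period Jack buffer comes to about 8.7 ms.

```python
PERIOD = 128    # frames per period (jackd -p128)
PERIODS = 3     # periods per buffer (jackd -n3)
RATE = 44100    # sample rate     (jackd -r44100)

period_ms = PERIOD / RATE * 1000            # one period: ~2.9 ms
buffer_ms = PERIOD * PERIODS / RATE * 1000  # whole buffer: ~8.7 ms

print(f"period: {period_ms:.1f} ms, total buffer: {buffer_ms:.1f} ms")
```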
THE PROBLEM:
Purely by happenstance, I had given the jackd command in my startup script the option “-s” which allows the server to ignore overruns and so on. So things seemed to be working as expected, but I noticed a lot more glitches than when I manually started up Jack and Pd from the terminal without the “-s” option. Upon removing it from my startup script, everything failed! WAH.
So I started piping STDERR and STDOUT to text files so I could read what either Jack or Pd was complaining about. As it turns out, Jack was unable to start with realtime priority due to a permissions problem. (I assume this is one of the things the “-s” option allows jackd to ignore, and thus start up with non-realtime priority. The problem is that Pure Data can’t connect to a non-realtime Jack server when its “-rt” option is specified.)
Now, I had already been through the whole rigamarole of setting proper memory and priority limits for the “audio” group, to which the user “pi” belongs. So I thought, okay, I have to execute these commands as “pi”, while simulating a login shell, because the security limits in question are only set during login.
So I did this:
su -l pi -c "/usr/local/bin/jackd -R -dalsa -dhw:1,0 -p128 -n3 -r44100 -S >> /home/pi/jackd.log 2>&1 &"
This says “login as user ‘pi’ and then run the jackd command with these options, piping the outputs to this log file and run it in the background”. Well, I still got all the same errors about not being able to set realtime priority. WHYYYYYYYYY?
THE SOLUTION:
I hunted and hunted and hunted on a Very Popular Search Engine until I decided to try searching “security limits not loaded with su -l” and found this.
(Makes me think of that Talking Heads lyric, “Isn’t it weird / Looks too obscure to me”.)
So by uncommenting the line
# session required pam_limits.so
in /etc/pam.d/su,
everything started working as expected.

CONCLUSION:
I now know a LOT MORE about PAM and how important it is to keep in mind when and in what order scripts and other little subsystems are executed; but also that sometimes the problem is EXTREMELY OBSCURE and is to be found in some seemingly far-flung config file.
I hope this helps anybody out there working with Pure Data and the RasPi. The second generation board really packs quite a punch and can run several hundred audio grains (run by vline~ and enveloped by vline~ and cos~) simultaneously without a problem. And I'm pretty sure this is just using ONE of the 4 cores!
I'm by no means an expert Linux sysadmin, so if you have any other suggestions or corrections, please let me know! I wouldn't have been able to get this far without all the generous and helpful writeups everybody else has contributed, both within the RasPi and Pure Data communities. If you have any questions about anything I glossed over here, I'll do my best to answer them.
-
Zygomorph
Oh, I just came across the ArduinoFFT and ArduinoFHT libraries, and they have a Pd patch example for a spectrum display! But my question still remains... Maybe I should just create a bank of [osc~] objects at the corresponding bin frequencies?
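If I go the oscillator-bank route, the bin center frequencies are at least easy to compute. A quick Python sketch (the FFT size of 128 and the 44.1 kHz rate are just assumptions for illustration, not values from either library):

```python
SR = 44100  # assumed sample rate
N = 128     # assumed FFT size on the Arduino side

# Bin k of an N-point FFT is centered at k * SR / N, so an osc~ bank
# would need one oscillator per bin, driven by that bin's amplitude.
bin_freqs = [k * SR / N for k in range(N // 2)]
# bin 0 is DC; bin 1 is ~344.5 Hz at these settings, and so on.
```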
-
Zygomorph
Perhaps this post would do better also in a more general forum, but I have a project coming up where I'd like to use Digispark Arduinos to analyze audio signals then send the FFT info wirelessly to Pure Data. I haven't gotten the equipment to futz with yet, but my question is this:
Is it in principle possible to take a serial stream of data representing FFT bin amplitudes and get them into [rifft~] for resynthesis? What's involved here? Some sort of status/frame bit, maybe? Some other object, maybe an external other than [rifft~]? Maybe somebody's already done this?
Thanks so much!
-
Zygomorph
On Pd-extended 0.43.4, both on Snow Leopard (10.6.8) and the same system (late 2008 15" MacBook Pro) upgraded to Mavericks (10.9), I can no longer click and drag objects! I can still click and drag to select objects, but I must use Shift+arrows to move them.
Thoughts? I just tried an external mouse, as well as turning off tapping and multi-gestures on my trackpad. No dice.
-
Zygomorph
Hi all! I'm a long-time Pd user, but now I'm not-so-gingerly dipping my toe into the lunacy that is GEM.
I've attached what I hope is an exciting and useful patch. Through a lot of trial-and-error, I was able to figure out how to make an arbitrarily-sized grid of circles using a few of ClaudiusMaximus's recursion techniques. I still don't exactly understand what's going on, though, which is one of my main complaints about GEM... it's so opaque as a dataflow language!
Anyway, because I wasn't able to get Gridflow to work on my system, I went about trying to color this grid of geos according to a live video input without it. Miracle of miracles, it somehow works. And I figured that, since I was able to manipulate the color of individual geos, I should be able to do the same thing to their translations and rotations using the same gem chain.
But no. When I tried manipulating the Y-axis rotation of the individual circles (to signify brightness corresponding to the grey output of [pix_data]... viz. skinny side toward the viewer at 0 and face-on at 1) I could not for the life of me get it to make any recognizable sense as a raster. It's detuned, so to speak. I tried throwing all imaginable combinations of [separator] and [t a a] at it at various points, to no avail.
On the upside, the recursive results of such manipulations are incredibly interesting. The patch annotations give you a few ideas to start.
I imagine somebody with more experience, or who understands OpenGL as a scripted language, might be able to shed some light on this question? Please?
Also, can somebody please explain to me why it is more useful for the x-y position for [pix_data] to be scaled from 0 to 1 instead of the actual pixel dimensions of the image? I can imagine some reasons why, but it would be much simpler in many cases to work in pixel addresses instead of arbitrary floats...
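My guess is that the 0..1 range buys resolution independence: the same patch works for any image size, at the cost of a conversion when you do want pixel addresses. The conversion itself is a one-liner; here's a sketch (function name is mine, just for illustration):

```python
def norm_to_pixel(x_norm, y_norm, width, height):
    """Convert normalized 0..1 coordinates (as [pix_data] uses) to
    integer pixel addresses for a width x height image."""
    return int(x_norm * (width - 1)), int(y_norm * (height - 1))

# e.g. for a 640x480 frame, (0.5, 0.5) maps to pixel (319, 239)
```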
Note: this patch makes use of [nrepeat] which seems to exist only as an abstraction somewhere in Claudius' recursion tutorial.