Hello.
I have a few 8-minute, 16-bit stereo WAV files I want to load into Pd. So I set my read flags to [read -maxsize 3e+07 -resize ...blah blah blah( and everything loads just dandy, works fine.
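Just to sanity-check that -maxsize value, here's my rough calculation (assuming 44.1 kHz files; the exact sample rate isn't really the point):

    # samples per channel in an 8-minute file, assuming 44.1 kHz
    sample_rate = 44100                         # assumed CD-quality rate
    samples_per_channel = sample_rate * 60 * 8
    print(samples_per_channel)                  # 21,168,000, comfortably under 3e+07

So the 3e+07 ceiling should have headroom even for files somewhat longer than 8 minutes.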
The thing is, in total that's 467 MB of sample data loaded into Pd-allocated memory. My concern is whether I can play this abstraction safely (crash-free, in a live environment) on my netbook, with its 2 GB of RAM and limited CPU.
On the big and beastly Mac, Activity Monitor shows Pd to have eaten up about 900 MB of RAM, with around 1.2 GB of virtual memory. This goes up and down according to how much I actually load in.
So my worries are: why the 900 MB of RAM use? Is that because it's expanding the 16-bit samples into internal 24- or 32-bit processing? And the virtual memory, that's only requested by Pd as a precaution, isn't it? Provided there is sufficient RAM, no disk swapping will take place, will it?
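My rough math for that theory (assuming Pd holds each loaded sample as a 32-bit float, which I believe vanilla Pd does):

    # 16-bit samples on disk (2 bytes each) would become 32-bit floats (4 bytes)
    # in Pd's arrays, roughly doubling the memory footprint
    disk_mb = 467                   # total 16-bit sample data on disk
    ram_mb = disk_mb * 4 / 2
    print(ram_mb)                   # ~934 MB, suspiciously close to the ~900 MB reported

That would fit the number Activity Monitor shows, if my assumption about the internal float format is right.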
Performance is lovely. I'm just worried about pushing the -maxsize so far.
On a side note, think about how much this sampling power would have cost only 10 or 20 years ago!! (Or the size the computer would have had to be... netbook! pah!)