what about tabread~ and tabread4~?
i've trouble when i fill an array with long samples (more than 1 minute).
when i play it i get a decent sound for 10 seconds. after that the sound gets more and more like a badly clocked digital device (that specific bell sound...). it doesn't depend on the time but on the table: if i put in an offset and start playing, say, 10 seconds into the file, the sound goes bad immediately (i mean it's not a vector or vector-size problem, sfplay~ works perfectly...)
i haven't found anything relevant about this on the pd mailing list.
anyone have an idea?
-
Tabread object troubles
-
huh?
really strange, i've never had this kind of problem, but maybe i've never loaded such a big wav file into a table.
i'll have a try and report back.
first thing, i could not fill more than 4 000 000 samples into a wavetable:
error: soundfiler_read: truncated to 4000000 elements
but it is enough to get something like 90 s of sound at 44.1 kHz -
i had a try with this: http://nekodata.free.fr/seekit.pd
i didn't notice any problem, but maybe there is a bug with the osx version of tabread.
Maybe give the xsample~ objects a try: http://www.parasitaere-kapazitaeten.net/ext/xsample/ -
well....
i use a mac at work, only for the net.
at home i've got pd 0.37 test10 on windows 2000 with iemlib & zexy & maxlib.
nothing to do with osx!
i had never run into that problem before because i was working on a looping mono sampler...
anyway, i'm trying to use pd under linux as well, but as you can imagine i've got trouble getting my card working on linux.... but i'm on the way....
(everything is ok on mandrake but no sound; lspci, modprobe hdsp.... ok, but no sound at all.... (alsa power...)
anyway, that's another topic. i'm investigating it on the pd mailing list and the alsa list.
hope i'll be successful....
huh, i looked at your patch (using a text editor, i'm at work), and i see i forgot to say that i use it with stereo sound, like
read -resize $1 array1 array2
perhaps that's the cause, because i've only been using stereo samples for a short while...
and i've never heard that before with mono samples (but i didn't use long samples in mono..)
edit: (discovered the anti-spam protection.... funny)
by the way, is there a way to get around the max soundfiler size, using maxsize?
if i don't find a solution to my problem i'll have to build a buffer system with short tables, but that idea doesn't sound very good to me (more cpu usage for nothing -
_by the way, is there a way to get around the max soundfiler size, using maxsize?_
wow! never noticed this option. yes, it lets you open wav files as huge as you want! thanks a lot.
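(for reference, the read message then looks something like this; the exact -maxsize value here is just an example i picked, big enough to go past the 4000000-sample default:)
read -resize -maxsize 16000000 $1 array1 array2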
i had a try with a 5 min stereo sample: the closer i get to the end, the more of those strange bell artifacts there are... i can play 1 or 2 minutes without hearing any, but the farther it plays from the start of the file, the more the sound degrades.... damn it, i'll keep looking for solutions. -
ok so it's not my config.
trouble there!
need help, i've tried a buffer solution. it's working but it's a cpu hog.
i mean i load the sample into a table and split that table into others of 10 sec max.... it works fine but it's stupid because it's pointless. i shouldn't have to do that....
by the way, if we could just define some sort of memory access it would be marvelous.
something like malloc in C. has anyone heard of such an object???
i've posted on the mailing list and i'm waiting for a hypothetical solution.... -
here's a list of my tests:
with the xsample~ objects: same problem...
with statwav~ from the creb externals: same shit....
i'm starting to think there is a deep problem with pure data and huge tables... -
yeah
i've spent too much time on pure data to admit it's not so good... at least compared to max. (bouuuuuuh)
and what i can't understand is that there is no problem with sfplay~, but i've tried reading the table with tabread~ and with tabread (at control rate)
and i've analyzed a block (64 samples) and compared them. same values!
so what?
we need opinions from mac & linux users... -
My guess is that the problem is numerical rounding errors in the [phasor~] and [*~]:
5 * 60 * 44100 = 13230000 (the number of samples in a 5min sample)
2^24 = 16777216 (above this number the gaps between Pd floats are bigger than 1)
Working close to the limit the quantisation of float values leads to inaccuracies.
Pd floats are not good enough to exactly represent large integers like sample indexes. -
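To illustrate what that quantisation looks like, here is a tiny standalone C snippet (my own illustration, not a Pd patch; it just uses the same 32-bit float type Pd uses for signals, with sample positions I picked myself):

    #include <stdio.h>

    int main(void) {
        /* 5 minutes at 44100 Hz: still below 2^24, so consecutive
           integer sample indexes are exactly representable */
        float index = 13230000.0f;
        printf("%.1f + 1 -> %.1f\n", index, index + 1.0f);   /* 13230001.0 */

        /* past 2^24 = 16777216 the gap between floats is 2 samples,
           so adding 1 to the index changes nothing at all */
        index = 20000000.0f;
        printf("%.1f + 1 -> %.1f\n", index, index + 1.0f);   /* 20000000.0 */

        /* and well before that, the fractional part needed for
           interpolation is already coarsely quantised: around
           5000000 a float can only step in 0.5-sample jumps */
        printf("%.4f\n", 5000000.0f + 0.3f);                  /* 5000000.5000 */
        return 0;
    }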
ok
sounds logical...
so
is there a way around it?
how can i use this software if it's not able to play from memory...?
as far as i've understood, phasor~ goes from 0 to 1, so the problem only shows up in the audio-signal operations, when int(a) != float(a) even though 'a' is supposed to be an integer value (an index).
i've tried with an object i've made out of a wrap~ and a -~ object, but now the error comes from the wrap object... have to find something else.
i can't imagine it can't be solved... at least in a future pd version or with a patch.
i haven't found any topic about this anywhere, and especially not on the mailing list...
need help (3 months of hard work to make a big looping sampler (like loopit but bigger)) and it's not reliable because of a software bug, not my fault.
i'm all white... (i mean i look like a dead man.)
hell and hell again
faith -
well
me again
i'm talking about this on another forum too, and a guy there said (and i think he's right):
16777216/44100/60 = 6.3405955 minutes.
but i get the problem from 25 sec on with tabread4~
and from 10 sec on with tabread~
so what's wrong?
i mean, 6 minutes would be acceptable for my patch. -
_so what's wrong?_
Have you tried using [print] or [print~] to see exactly what values the phasor is sending to the [tabread(4)~] (after scaling to the length of the array)?
How I would try to solve this: use a [phasor~] with frequency samplerate/blocksize, scaled so the output is { 0, 1, 2, 3, ..., 60, 61, 62, 63 } within each block, and a [bang~] to increment a block counter by 1 after each block. Then add the [phasor~] output to blocksize*blockcounter.
Blocksize is 64 by default; I think there is an object you can get it from but I can't remember the name. You can get the sample rate by banging a [samplerate~], if I remember correctly.
Not sure how to adapt it to loop lengths other than a multiple of blocksize, but I'm sure you could do something with the phase input of the [phasor~] and/or an [expr~] (or variant).
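A rough sketch of the indexing scheme in plain C (just to show the arithmetic involved; the variable names are mine and this is not a Pd patch):

    #include <stdio.h>

    #define BLOCKSIZE 64            /* Pd's default block size */

    int main(void) {
        long blockcounter = 0;      /* advanced once per block, like a counter driven by [bang~] */

        for (int block = 0; block < 3; block++) {
            for (int i = 0; i < BLOCKSIZE; i++) {
                /* the scaled [phasor~]: 0, 1, 2, ..., 63 within each block,
                   so this part never grows big enough to lose precision */
                float ramp = (float)i;

                /* table index = blocksize*blockcounter + ramp; in Pd the sum
                   still ends up as a 32-bit float signal, so the ~2^24 ceiling
                   remains, but the within-block part stays sample-accurate */
                float index = (float)(blockcounter * BLOCKSIZE) + ramp;

                if (i == 0 || i == BLOCKSIZE - 1)
                    printf("block %ld: index %.0f\n", blockcounter, index);
            }
            blockcounter++;
        }
        return 0;
    }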
EDIT: this doesn't defeat the 6.34 minute limit, just lets you get closer to it while maintaining sample level accuracy. -
thx claudius for the feedback.
i don't know much about block size; i've had strange behaviour when changing the values, i'm not sure how it works.
i've made a patch to check the difference between tabread and tabread~.
the values are equal over the whole file (i made a table to collect the difference: whatever t, f(t)=0, so the values are the same everywhere). but i included a print~ object and had a look at the terminal, and the values are rounded! i lose 1 decimal.
i don't know. perhaps it's just print rounding...
on the other hand, i've tried a way to split a loaded file into a collection of samples by copying data between tables. i have to do it at control rate and not at audio rate; if i do it at audio rate i lose the benefit. so it's not workable, because i need roughly 5 sec to copy one second of sound, and we're talking about huge samples, so that's not a solution.
and i have to admit i don't understand the 2^24 limit; for my part, i've always heard about 32 bit internal resolution, hence 2^31.
a simple division, and i see that 2^31 samples would mean no artifacts before... more than 13 hours.... should be acceptable, huh?
miller said on the mailing list to use tabplay~ with a delay line to feed it; that works much better than tabread~ for long files.
it sounds like a way of admitting that there really is a serious problem with the float precision....
see ya -
_the 2^24 limit, for my part i've always heard about 32 bit internal resolution, hence 2^31_
A floating point number is a binary way of writing numbers like -1.462e24 or 6.444e-10. Out of the 32 bits, 1 bit is the sign of the number, and 8 bits are the exponent (the number after the "e"). That leaves 23 bits for the actual digits of the number (plus one implicit leading bit, giving 24 significant bits, which is where the 16777216 = 2^24 comes from). You only get a certain number of significant digits of precision, which means the absolute difference between adjacent floats increases as the size of the float increases.
Say you had three decimal digits of precision. Then you could represent 1.00 - 9.99 in steps of 0.01, 10.0 - 99.9 in steps of 0.1, and 100 - 999 in steps of 1, but after that the difference between successive numbers you can represent is 10, so you have 1000, 1010, 1020, ..., 9980, 9990, 10000, then 100s: 10100, 10200, and the gap increases to 1000 at the next power of 10. Going the other way, with smaller numbers, the absolute gap between floats gets smaller, but the relative (percentage) gap between floats stays roughly the same over the whole range.
If Pd had a 32-bit integer type, it could represent time up to about 27 hours (as an unsigned sample count at 44.1 kHz); if it had a double (64-bit) float type it could go even further, and a 64-bit integer type could go further still. And you really want an integer type for time, because sampling is discrete in time - then when it overflows you get an obvious error, rather than the non-failing distortions you get with float quantisation (quantisation = loss of significant digits of precision).
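To make those gap sizes concrete, a quick C check of my own (how the spacing between adjacent 32-bit floats grows with the size of the sample index, and what an integer sample counter would buy):

    #include <stdio.h>
    #include <math.h>
    #include <stdint.h>

    int main(void) {
        /* sample indexes for 1 s, 10 s, 100 s, 6.34 min and ~17 min at 44.1 kHz */
        float idx[] = {44100.0f, 441000.0f, 4410000.0f, 16777216.0f, 44100000.0f};
        for (int i = 0; i < 5; i++)
            printf("near %10.0f the gap between adjacent floats is %g samples\n",
                   idx[i], nextafterf(idx[i], INFINITY) - idx[i]);

        /* an unsigned 32-bit integer counts every single sample exactly */
        printf("uint32 range at 44100 Hz: %.1f hours\n",
               (double)UINT32_MAX / 44100.0 / 3600.0);
        return 0;
    }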
I think Gridflow has more types than Pd, that could be worth looking into. -
I didn't follow all the discussion here so far, but why not use readsf~ and writesf~ instead of tabread~?
With them you have no clicks and stuff..
If you just want to loop soundfiles and send them through effects etc., that's the best way I think.. -
i can't use them.
i need to do stretching stuff... so i need to control the index of the table, hence the speed,
not just the starting point.
but effectively, at this point, i can manage to work with tabplay~ for fx, but not for the sampler. (how do you pitch it!)