-
ddw_music
@Obineg said:
90% of the humans are not even able to give answers which sound correct.
I'm not sure which is worse: an idiot who is obviously wrong, or an idiot who sounds thoroughly convincing.
I don't have the link handy, but one commentator called GPT a "mansplaining machine" -- often wrong, but projecting absolute confidence in its answers.
Social media haven't helped, because every damn moron's opinion gets preserved in the digital record instead of being forgotten by the time you get home from the pub. So we get used to stupidity acquiring the air of legitimacy that comes simply from appearing in print.
-
ddw_music
@Obineg said:
chatGPT is more intelligent than 90% of the world population
It's not. It makes up answers that sound correct, but they aren't based on real understanding. This may change at some point, but it's not there yet.
If humans can't tell the difference, that's humans' fault.
hjh
-
ddw_music
@KMETE said:
I was wondering if there is any difference between multiple sf-play2~ or readsf~ in term of memory usage?
[stereofile] loads the contents into RAM. (sf-play2~ is only the player, not the loader, but it depends on audio data loaded into RAM by stereofile.)
readsf~ streams from disk so its RAM usage should be much smaller.
One of the reasons why I worked on stereofile and sf-play2~ is that I think, if you tell the player to play an audio file at rate = 1, it should sound at the file's normal speed at any Pd sample rate. Pd's built-in [tabplay~] does not do this -- if the system sample rate differs from the file's rate, it will play faster or slower than normal. That is, system settings affect the patch's behavior. I think [readsf~] also has this problem, but I haven't checked, so I could be wrong about that.
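If it helps to see the idea outside of Pd, here's a minimal sketch (Python, not Pd -- the function name is my own) of the compensation I mean: the player's per-output-sample step through the file should scale by the file's rate over the system's rate, so rate = 1 always means normal speed.

```python
def phase_increment(play_rate, file_sr, system_sr):
    """Samples of the source file to advance per output sample.

    Scaling by file_sr / system_sr makes play_rate = 1.0 sound like
    the file's normal speed regardless of the system sample rate.
    A naive player (increment = play_rate alone) speeds up or slows
    down whenever the two rates differ.
    """
    return play_rate * file_sr / system_sr

# A 44.1 kHz file on a 48 kHz system: the step is < 1, so playback
# is spread over more output samples and real-time speed is preserved.
print(phase_increment(1.0, 44100, 48000))  # 0.91875
```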
hjh
-
ddw_music
@KMETE said:
what is the best method of playing multiple files without the need to multiple everything?
sf-play2~ accepts a symbol in the right-hand inlet, to change the buffer being played. (That is, this -- "With [sf-play2~] I think you will have to use multiple players as you did here" -- is incorrect. I'd be a terrible programmer if I made the abstraction so inflexible.)
hjh
-
ddw_music
I think this handles the left-hand overlap problem.
I did not insert a note-off for rearticulated notes, so this patch is incomplete. Also, the right-hand time-window idea is not implemented here... but that's all the time I have right now.
Like a lot of software problems, it seems like a lot of the difficulty is in defining the requirements.
jameslo is 2000% correct about this. Often the working method is "let me throw some objects into the window and hope some of it sticks." The usual result of that is confusion.
hjh
-
ddw_music
@Alan-Angel said:
- I don't plan on play two left hand notes, I just want to play fast galloping basslines like dun du-du dun du-du dun du-du dun and it's easier to do with one hand and play the notes with the other, like a real bass.
OK, so, hold a pitch with the right hand and play fast notes with the left -- rhythm and articulation under control of the left hand. So that removes some cases.
However... user-interaction data are always dirty. You say "I don't plan on..." but in practice, when you're playing, the actual MIDI messages coming in will not always conform to the ideal. So if your patch is like "it works as long as I never X"... trust me... sometimes X will happen... the patch needs to be able to handle it.
E.g., I think the following two scenarios are guaranteed to happen sometimes:
1. You hit the right- and left-hand notes "at the same time," but the left-hand note arrives first. If the logic here is "nothing is currently held in the right hand, so, do nothing," then the first note will become a rest. So there should probably be some logic such as: if the right-hand note arrives within a certain time window after the left-hand note (like 20-30 ms or less), then play it once both are received.
2. While rolling the fingers over G-A-B in the example, some of them overlap. If you're using left-hand note-off to release notes, and you don't account for overlaps, then some notes will be cut off prematurely.
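For the time-window idea, here's how it could be sketched in text form (Python rather than Pd, and the 25 ms window, class name, and method names are all just assumptions for illustration):

```python
import time

WINDOW_SEC = 0.025  # ~20-30 ms grace period (assumed value)

class PairingState:
    """Pairs a left-hand (rhythm) note-on with a right-hand (pitch) note.

    If the left hand arrives first, remember it with a timestamp instead
    of discarding it; a right-hand note arriving within the window still
    triggers a note, so near-simultaneous presses don't become rests.
    """
    def __init__(self):
        self.held_pitch = None     # current right-hand pitch, if any
        self.pending_left = None   # (velocity, arrival_time) or None

    def left_note_on(self, velocity, now=None):
        now = time.monotonic() if now is None else now
        if self.held_pitch is not None:
            return ('play', self.held_pitch, velocity)
        self.pending_left = (velocity, now)   # wait for the right hand
        return ('wait',)

    def right_note_on(self, pitch, now=None):
        now = time.monotonic() if now is None else now
        self.held_pitch = pitch
        if self.pending_left is not None:
            vel, t = self.pending_left
            self.pending_left = None
            if now - t <= WINDOW_SEC:
                return ('play', pitch, vel)   # left hand arrived just early
        return ('hold',)
```

So a left-hand note 10 ms ahead of its right-hand pitch still plays, while a stale left-hand press from 100 ms ago is quietly dropped.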
For problem 2, assuming mono, you would release a playing note only if the note-off matches the last note to be played:
1. note-on 43
2. note-on 45 -- then, quickly, note-off 43
3. note-on 47 -- then, quickly, note-off 45
4. note-off 47
At step 2, note-off 43 should not cut the sound off, because 43 is no longer the active trigger note number (but, since note-on 45 is rearticulating, it will have to generate a note-off before the new note-on!). At step 4, though, note-off 47 should cut off the note.
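That rule is easy to sketch in text form (Python, not Pd; the class and event names are mine):

```python
class MonoVoice:
    """Mono left-hand trigger logic: release only on the note-off that
    matches the most recent note-on, so overlapped rolls (G-A-B) don't
    cut the sounding note prematurely. Names are illustrative, not Pd."""
    def __init__(self):
        self.active = None   # trigger note number currently sounding
        self.events = []     # emitted synth events, for inspection

    def note_on(self, n):
        if self.active is not None:
            # rearticulation: close the old note before starting the new one
            self.events.append(('off', self.active))
        self.active = n
        self.events.append(('on', n))

    def note_off(self, n):
        if n == self.active:            # only the latest trigger releases
            self.events.append(('off', n))
            self.active = None
        # stale note-offs from overlapped keys are ignored

v = MonoVoice()
for ev, n in [('on', 43), ('on', 45), ('off', 43),
              ('on', 47), ('off', 45), ('off', 47)]:
    v.note_on(n) if ev == 'on' else v.note_off(n)
print(v.events)
# note-off 43 and note-off 45 are ignored; each note-on rearticulates cleanly
```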
I'm not sure I have time myself to build all of this, but maybe I can do some pieces of it and post them later.
hjh
-
ddw_music
My questions about this:
1. Right hand first, then left hand: a note should be produced upon the left-hand note-on.
   - What if one right-hand note is pressed, and two or three left-hand keys? Multiple notes, or only the first? (Polyphony, I suppose, would have to be right-left, right-left -- not the same as right-left-left.)
2. Left hand first, then right hand: should it play the note upon receipt of the right-hand note-on, with the left-hand velocity? Or not play anything? Or something else?
   - And, if it should sound, what about left-right-right? Portamento? Ignore the second?
A full solution will be different depending on the answers to these questions. They need to be thought through anyway, because you can't guarantee correct input -- the logic needs to handle sequences that are not ideal, too.
hjh
-
ddw_music
Instead of the message box, use
[pack f 500]
(could be [pack f f], but the 500 initializes the time value). The [pack] will have two inlets. Your random result goes into the left inlet; the time goes into the right inlet.
Then you can dump the "$1 500" message box.
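For anyone unfamiliar with [pack]'s hot/cold inlets, here's a rough Python sketch of the behavior (not real Pd code, just an analogy; the class name is mine):

```python
class Pack:
    """Rough sketch of Pd's [pack f 500] semantics: the right inlet is
    'cold' (it only stores a value), the left inlet is 'hot' (it stores
    its value AND emits the combined list). The argument 500 is the
    initial value of the right inlet."""
    def __init__(self, init_right=500):
        self.left = 0.0
        self.right = init_right

    def right_inlet(self, v):
        self.right = v                    # store only; no output

    def left_inlet(self, v):
        self.left = v
        return [self.left, self.right]    # hot inlet triggers output

p = Pack()
print(p.left_inlet(7))    # [7, 500]
p.right_inlet(250)
print(p.left_inlet(3))    # [3, 250]
```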
hjh
-
ddw_music
Branching objects in Pd are primarily:
- [select] to pass through bangs (this one suits your case)
- [route] to pass through messages
- [moses] to split numbers into < or > paths
You can do what you need here with [select].
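If it helps, here's a rough Python analogy of the three branching behaviors (the function names and return shapes are mine, and in real Pd [select] and [route] have one outlet per argument plus a rightmost rejection outlet -- this is just the gist):

```python
def select(value, targets):
    """Like [select]: a matching value produces a bang;
    non-matches pass through the rejection path."""
    return 'bang' if value in targets else ('reject', value)

def route(message, selectors):
    """Like [route]: a matching selector passes the rest of the message,
    with the selector stripped off."""
    selector, *rest = message
    return (selector, rest) if selector in selectors else ('unmatched', message)

def moses(value, split):
    """Like [moses]: numbers below the split go left, the rest go right."""
    return ('left', value) if value < split else ('right', value)

print(select(2, [1, 2, 3]))                   # bang
print(route(['freq', 440], ['freq', 'amp']))  # ('freq', [440])
print(moses(5, 10))                           # ('left', 5)
print(moses(12, 10))                          # ('right', 12)
```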
hjh
-
ddw_music
@oid said:
@ddw_music said:
One of the sines is too low to be audible
How does the amplifier and speaker know what is audible? We can hear the tremolo effect of this beating so the speaker has to be moving at that rate and those tones we hear as tones are moving on top of that beating...
Perhaps, but when I checked standard sine vs sine amplitude modulation (ring modulation), any subaudio energy was too low to be detected, nowhere near loud enough to burn anything. There might have been some subaudio energy but it must have been at least 70-80 dB below the upper frequencies.
EDIT: SuperCollider's freqScope shows a peak up to 0 dB around 440 Hz, and low-frequency energy hovering around -80 or -90 dB, for:
SinOsc.ar(440) * SinOsc.ar(4)
That's some subaudio energy, but still much quieter than the sine waves.
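FWIW, the math backs this up: by the product-to-sum identity, sin(2pi*440t) * sin(2pi*4t) = 1/2 [cos(2pi*436t) - cos(2pi*444t)], so ideal ring modulation of 440 Hz by 4 Hz contains only 436 and 444 Hz partials and, in theory, no subaudio component at all. A quick numerical check of the same signal (Python/NumPy here rather than SuperCollider):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr                        # exactly 1 second of signal
sig = np.sin(2 * np.pi * 440 * t) * np.sin(2 * np.pi * 4 * t)

spec = np.abs(np.fft.rfft(sig)) / len(sig)    # magnitude spectrum, 1 Hz bins
freqs = np.fft.rfftfreq(len(sig), 1 / sr)

# The energy sits at the sum/difference frequencies, not at 4 Hz:
peak_bins = freqs[np.argsort(spec)[-2:]]
print(sorted(peak_bins))                      # [436.0, 444.0]
sub = spec[freqs < 20].max()
print(sub / spec.max())                       # tiny: essentially no subaudio
```

Any low-frequency energy a real speaker reproduces would come from nonlinearity in the chain, not from the ideal product itself.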
hjh