### MIDI into [seq] and Markov chains

The basic idea is:

1. read all the data into a [text]
2. choose a random starting point and take the second value of the prob list as current state
3. find all lines with the current state as first value, read the second values into a list and the probabilities into an array
4. use [array random] and take the corresponding value from the list as new current state
5. repeat from step 3.

(In the implementation, another column for counting is added, so the first value becomes the second, and so on.)

This should also work with more than two probabilities per state, and also if the probabilities don't add up to 100. In fact, they don't have to be percentages at all (untested).
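The steps above can be sketched outside Pd as well. A minimal Python sketch, with hypothetical transition data standing in for the lines of the [text] object, and `random.choices` standing in for [array random] (weights need not sum to 100):

```python
import random

# Each tuple: current state, next state, weight.
# Hypothetical data, analogous to lines in a Pd [text] object.
lines = [
    (60, 62, 50), (60, 64, 50),
    (62, 60, 30), (62, 64, 70),
    (64, 60, 100),
]

def next_state(state):
    # find all lines with the current state as first value
    candidates = [(nxt, w) for cur, nxt, w in lines if cur == state]
    values = [nxt for nxt, _ in candidates]
    weights = [w for _, w in candidates]
    # weighted random choice, like [array random] indexing into the list
    return random.choices(values, weights=weights, k=1)[0]

# choose a random starting point, then walk the chain
state = random.choice(lines)[0]
sequence = [state]
for _ in range(8):
    state = next_state(state)
    sequence.append(state)
print(sequence)
```

Every step emitted this way is a transition that actually appears in the data, which is the core of the first-order chain described above.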

• @porres I think that [markov] is just more flexible than [anal] + [prob], or [anal] + [markov_matrix] for that matter, so I assume the whole prob-matrix approach is going nowhere.

• I still have to check. The thing is, I like the approach where you can design your own probability matrix without the "learning" process. There's an advantage in just sitting down and writing out how many times you want "A" to be followed by "B", and so on...

As for the machine learning approach, your markov design is flawless and extremely versatile, and I've already put it in my library.

• @ingox Would it be possible to choose the most similar value if there is no identical value (of course, the similarity measure would need to be defined first)?

• @Jona In the sense that Markov chains define similarity by whether one note followed another in the source material?

Maybe you can describe a bit more how notes would be selected, maybe with an example?

• @ingox Maybe it doesn't make much sense for single notes (I'm not sure how many different notes an average song has, but probably not too many), but for chords and for velocity, perhaps. One problem: unless we are at the beginning or the end of a song, there is always an identical value somewhere in the chain. So maybe the identical value should just get a higher probability than a similar value? I have to think about a concept...

• @Jona Generally speaking, a markov chain can be created not only over notes or chords, but also over abstract values. For example, in the [markov] object, the chains are created over ids. So you could for example put your notes/chords/velocity values in a [text] and use the row numbers to build the chains using [markov]. [markov] would in turn output row numbers and you could decide what to play from there.
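The id-based approach described above can be sketched in Python (hypothetical chord data; in Pd the rows would live in a [text] and [markov] would emit the row numbers):

```python
import random

# Hypothetical chord rows; chords are lists, so we chain over row ids instead.
rows = [[60, 64, 67], [62, 65, 69], [60, 64, 67], [65, 69, 72]]

# Build the chain over ids: map each distinct row's content
# to the ids that may follow it in the source material.
transitions = {}
for i in range(len(rows) - 1):
    key = tuple(rows[i])
    transitions.setdefault(key, []).append(i + 1)

# Walk the chain: output ids, then look up the chords to play from the rows.
idx = 0
for _ in range(5):
    chord = rows[idx]
    print(idx, chord)
    followers = transitions.get(tuple(chord))
    if not followers:
        break  # a state with no recorded successor ends the walk
    idx = random.choice(followers)
```

Because the chain only ever handles ids, the same mechanism works unchanged for notes, chords, velocities, or any other values stored in the rows.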

• @ingox @porres Hi, just stumbled across your Markov remarks. A long time ago I wrote a markov external for pure data (c code, no abstraction) which can handle markov chains of any order (realtime adjustable) and has the option for internal/external feedback changing probabilities and such which is still functional. As it uses binary search trees it is extremely fast and has (somewhat limited) support for lists of integers as elements (like dtime, keynum, velocity or chords).
It is also quite flexible in that it lets you define deviations within which values are still considered equal, and more.
It's open source and I can dust it off and send it in case you're interested. I'm never on this forum and just registered for letting you know. Don't know if I receive/see any answer from you.

• @Orm that sounds great. Would be nice to try your external.

• I recently entered my midifilemarkovgenerator, which is based on this abstraction, in the MIDI Innovation Awards:
https://www.midi.org/component/zoo/item/midifilemarkovgenerator
Not sure if it makes sense, but you can vote for it (or for something else) until the 14th of May, if you like...
