-
elden
I noticed that breeding MIDI CC parameter values under 10 is very difficult, as minimal values tend to blend back into the population, but minimal values are important for envelope parameters like attack and decay. I'll have to patch in an option to weight the randomization of parameter values for each CC# so that it tends to generate smaller numbers.
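Something like this, sketched in Python rather than Pd (the per-CC exponents are placeholder values; CC 73/72 are the General MIDI attack/release controllers):

    import random

    # Bias exponent per CC number: 1.0 = uniform draw, higher values push
    # the probability mass toward 0 (placeholder numbers, not Ewolverine's).
    cc_bias = {73: 3.0, 72: 3.0, 74: 1.0}

    def random_cc_value(cc, max_value=127):
        """Draw a CC value, skewed toward small numbers where biased."""
        exponent = cc_bias.get(cc, 1.0)
        return int(random.random() ** exponent * max_value)

    # With exponent 3.0, roughly 43% of draws land below 10.
    samples = [random_cc_value(73) for _ in range(10000)]
    print(sum(1 for v in samples if v < 10) / len(samples))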
And a much bigger population of at least 12 sounds would be helpful. Will take a while... -
elden
Exactly, but you don't have to create a loop if you route a keyboard into your synth. You just have to make sure that the synth only gets note events from your keyboard - not the CCs, as those are coming from Ewolverine.
You can also control Ewolverine with your keyboard. Just scroll to the right to read the MIDI implementation chart. -
elden
@alexandros
it could very well be that these objects work. Being a Pd rookie, I depend on external help for all my patches. Thank you very much. -
elden
Hello again,
I need to compare audio against a target sample. I heard that MFCC error calculation does a good job in this field. Is there any object in Pd that does that for audio recordings that are several seconds long?
regards
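To illustrate the calculation I'm after, here is a rough sketch outside Pd, in Python with the librosa library (the file names are placeholders; within Pd itself, the timbreID library is the usual suggestion for MFCC analysis, as far as I know):

    import librosa
    import numpy as np

    def mfcc_error(path_a, path_b, sr=22050, n_mfcc=13):
        """Mean squared error between the MFCC sequences of two recordings."""
        a, _ = librosa.load(path_a, sr=sr)
        b, _ = librosa.load(path_b, sr=sr)
        mfcc_a = librosa.feature.mfcc(y=a, sr=sr, n_mfcc=n_mfcc)
        mfcc_b = librosa.feature.mfcc(y=b, sr=sr, n_mfcc=n_mfcc)
        # Compare only up to the length of the shorter recording.
        n = min(mfcc_a.shape[1], mfcc_b.shape[1])
        return float(np.mean((mfcc_a[:, :n] - mfcc_b[:, :n]) ** 2))

    print(mfcc_error("candidate.wav", "target.wav"))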
-
elden
Hello everyone,
as most of you know, I'm continuously developing my Ewolverine patch, with which you can genetically breed sounds out of your MIDI gear.
In order to automatically approximate synthesizer parameters, Ewolverine must compare different synth sounds to a target sample. The problem is that sounds match the target sample differently depending on the comparison criterion. Examples:
Case 1:
If the selection criterion is the length of the synthesized sounds in comparison to the target, the selection mechanism may choose synth parameters that generate sounds as long as the target sample but pay no attention to its timbre.
Case 2:
If the selection criterion is the onset, the generated sounds may all have equal onsets but differ in length and timbre.
What I need is a form of multi-objective optimization that takes all criteria into account and tells Ewolverine's selection mechanism which synthesized sound is nearest to the target sample overall.
Is there anything in Pd that I could use? Do you have any idea what I could do, or do you know anyone who could help me?
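To make the idea concrete, here is a rough Python sketch of one possible approach: normalize each criterion across the population and rank by a weighted sum (the numbers are made up; Pareto ranking would be the more thorough alternative):

    import numpy as np

    def combined_error(errors, weights=None):
        """errors: one row per candidate, one column per criterion
        (e.g. length error, onset error, timbre error)."""
        e = np.asarray(errors, dtype=float)
        # Min-max normalize each criterion so no single one dominates
        # just because of its scale.
        span = e.max(axis=0) - e.min(axis=0)
        span[span == 0] = 1.0
        normalized = (e - e.min(axis=0)) / span
        w = np.ones(e.shape[1]) if weights is None else np.asarray(weights, dtype=float)
        return normalized @ (w / w.sum())

    # Hypothetical population of four sounds:
    errors = [[0.9, 0.1, 0.3],
              [0.2, 0.2, 0.2],
              [0.5, 0.0, 0.9],
              [0.1, 0.8, 0.1]]
    print(combined_error(errors).argmin())  # candidate nearest overall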
-
elden
EWOLVERINE v.7.1 beta by Henry Dalcke 6.pd
...changed some default settings of the Target Drive and corrected the Help subpatch a little...
-
elden
Good thought, thanks. Maybe it's a start. I'll check it out as soon as I can.
-
elden
Hey guys,
I just checked this little thing
It seems to me that they switch between different wavetable oscillators within the time of one wavelength at the key frequency.
In general, one could easily do this using an audio input switch that cycles through the different inputs at the rate of the wave frequency of the triggered MIDI note. If you then manipulate the different audio streams that are connected to the different audio inputs of the switch, you can edit the different waveform segments separately. My question: how can I switch between different audio inputs at the rate of a MIDI note's frequency?
Or might it be less complicated to just concatenate different wavetables into one wavetable whose length equals one period of the note frequency? How do you think Waverazor works?
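To test the concatenation idea offline, something like this Python/numpy sketch should work - it reads a different single-cycle table on each successive period of the note (tables and frequency are placeholders):

    import numpy as np

    def render(wavetables, freq, duration, sr=44100):
        """Cycle through single-cycle wavetables, one table per period."""
        out = np.zeros(int(duration * sr))
        phase, table_index = 0.0, 0
        for i in range(len(out)):
            table = wavetables[table_index]
            out[i] = table[int(phase * len(table)) % len(table)]
            phase += freq / sr
            if phase >= 1.0:  # one period finished: advance to the next table
                phase -= 1.0
                table_index = (table_index + 1) % len(wavetables)
        return out

    # Two example 2048-sample cycles: sine and square.
    t = np.linspace(0, 1, 2048, endpoint=False)
    tables = [np.sin(2 * np.pi * t), np.sign(np.sin(2 * np.pi * t))]
    audio = render(tables, freq=110.0, duration=1.0)

In Pd, I imagine a [phasor~] at the note frequency could do both jobs: drive the table lookup and advance a table counter every time its output wraps.
-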
elden
hello guys,
I used the last few days to analyze the Authentic Expression Technology (AET) filter in Native Instruments' "Kontakt" in order to make my own remake of it, in the form of a multiple-input audio-source morphing tool that functions exactly like the AET filter in NI Kontakt. I'll upload a video demo of how it's done soon. For now, here's the description. The following little patch is a recreation of how Kontakt's modwheel behaves in relation to key velocity. You need it for an authentic AET experience.
Henry velocity plus modwheel merger.pd
What Kontakt's channel vocoder does is this: it swaps the carrier and modulator input signals every time a morph has finished, while at the same time routing another audio source into the respectively muted input.
This can easily be done with freeware, too - with up to 12 audio sources!
Here is how. This picture shows all the software needed to fake the AET filter: a modular host / VST wrapper, Midicurve, RD switch 6×6, and DtblkFXs.
routing and parameter assignment:
- route your MIDI keyboard into Pd, from Pd into your DAW, and inside your DAW route it into all 'Midicurve' plugins
- route the first and second Midicurves into a 6×6 switch each, assigning each 6×6's input-switching parameter to the MIDI CC coming from the respective Midicurve plug-in
- route the third Midicurve to DtblkFXs, assigning its "0.Val" parameter to the MIDI CC coming from that respective Midicurve
- route your audio sources (synths / samplers / microphones) into the RD switches - instruments 1, 3, 5, 7 & 9 into one switch and instruments 2, 4, 6 & 8 into the other (every instrument gets its own input on the RD switches - don't put all of them into the first audio channel, otherwise they won't morph)
- draw the transfer functions of the Midicurves as seen in the picture, and make sure to tick the "CC" checkbox and select the CC of your modwheel! If you now turn your modwheel up, the switches should change their input channels exactly at the moment a full morph from one source to the next has finished.
- adjust DtblkFXs as seen in the picture!
Much fun with your own totally free AET morphing tool!
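If you want to rebuild the switching logic yourself, the control math behind it is simple - a rough Python sketch (all names hypothetical):

    def morph_state(modwheel, n_sources):
        """Map a 0..127 modwheel value onto n_sources - 1 morph segments.
        Returns (source_a, source_b, crossfade), where crossfade runs
        from 0 (all source_a) to 1 (all source_b) inside the segment."""
        segments = n_sources - 1
        position = modwheel / 127 * segments
        segment = min(int(position), segments - 1)
        return segment, segment + 1, position - segment

    # Even and odd sources live on opposite RD switches, so every finished
    # morph leaves one side silent and free to be re-routed to the next source.
    for mw in (0, 32, 64, 96, 127):
        print(mw, morph_state(mw, n_sources=6))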
-
elden
Oh, thank you! That's a good start - although I don't really get the meaning of the numbers in the [connect( message yet. And what influences the positioning of an object in a patch?
-
elden
Hi,
I want to replicate certain objects a variable number of times and patch them to others automatically. Is that possible, and if yes, how? Thanks!
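For reference, this is usually done with dynamic patching: you send messages to a (sub)patch canvas. A rough Python sketch that prints such messages for a [pd mysub] subpatch (the [osc~]/[*~] chain is only an example). In [connect(, the four numbers are source object, source outlet, target object, target inlet, with objects counted from 0 in creation order; [obj( takes the x/y position first, which is what determines where an object lands:

    def build_chain(n):
        """Print dynamic-patching messages for n [osc~] -> [*~] pairs.
        Send each line as a message to the canvas, e.g. via [s pd-mysub]."""
        msgs = ["clear"]
        for i in range(n):
            x, y = 20, 20 + i * 60              # obj <x> <y> <name> <args>
            msgs.append(f"obj {x} {y} osc~ {220 * (i + 1)}")
            msgs.append(f"obj {x} {y + 30} *~ 0.1")
            # connect <src obj#> <src outlet> <dst obj#> <dst inlet>
            msgs.append(f"connect {2 * i} 0 {2 * i + 1} 0")
        for m in msgs:
            print(m + ";")

    build_chain(3)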
-
elden
Hello everyone,
I want to connect an LSTM (long short-term memory) neural network to MIDI parameters and let it find out which assigned synth parameters influence the generated sound in which way, for a sound-matching application. How would you do this? Regards
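To sketch the supervised setup this implies (Python/PyTorch; all sizes and names are made up): random parameter settings go to the synth, the recordings are analyzed into features, and the network learns the parameters-to-features mapping, which could then be searched for a match to a target:

    import torch
    import torch.nn as nn

    class ParamToSound(nn.Module):
        """LSTM predicting a per-frame sound descriptor (e.g. MFCC frames)
        from one vector of synth parameter values."""
        def __init__(self, n_params=16, n_features=13, hidden=64, frames=100):
            super().__init__()
            self.frames = frames
            self.lstm = nn.LSTM(n_params, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_features)

        def forward(self, params):               # params: (batch, n_params)
            x = params.unsqueeze(1).repeat(1, self.frames, 1)
            out, _ = self.lstm(x)                # (batch, frames, hidden)
            return self.head(out)                # (batch, frames, n_features)

    model = ParamToSound()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Dummy stand-ins for (parameter settings, analyzed recordings) pairs:
    params = torch.rand(8, 16)
    features = torch.rand(8, 100, 13)
    for _ in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(params), features)
        loss.backward()
        optimizer.step()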
-
elden
Looks pretty convincing, actually. I need to check it out in more detail. Stay tuned.
-
elden
Do you know my patch "Ewolverine"? The functionality is limited to MIDI for now, but I already experimented with a pdvst version and the results were similar. The only problem was that I couldn't figure out how to turn a Pd patch into a VST. I was also thinking about the things you mentioned. I don't think that's very hard to do, but first I need to convert a patch into a VST with MIDI and audio inputs as well as outputs. I don't use artificial neural nets. The audio analysis of sampled sounds is based on spectral comparison to a target sample; ANNs are not necessary for such operations. If I wanted to check adapted sounds against special features they should provide, I'd surely use ANNs, but that would probably cost me many months or even years of development. I don't think I have the necessary endurance for that, at least not in Pd. The patches would become highly complex, and I surely wouldn't be able to keep an overview of them myself.
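Roughly the kind of spectral comparison I mean, sketched in Python/numpy (not Ewolverine's actual code): average the windowed magnitude spectra of candidate and target and take the mean difference:

    import numpy as np

    def spectral_error(candidate, target, frame=2048, hop=512):
        """Mean absolute difference between the averaged magnitude spectra
        of two equal-sample-rate signals (both longer than one frame)."""
        n = max(len(candidate), len(target))
        a = np.pad(candidate, (0, n - len(candidate)))
        b = np.pad(target, (0, n - len(target)))
        window = np.hanning(frame)

        def avg_spectrum(x):
            frames = [np.abs(np.fft.rfft(window * x[i:i + frame]))
                      for i in range(0, len(x) - frame, hop)]
            return np.mean(frames, axis=0)

        return float(np.mean(np.abs(avg_spectrum(a) - avg_spectrum(b))))

    # e.g. rank a population of rendered candidates against the target:
    # scores = [spectral_error(c, target) for c in candidates]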