• chmod

    @chmod

    I ended up ditching the ring buffers and doing it like this — so far I haven't seen any issues tapping from an mp4 input without them:

    if (context->frameSize && outputBufferSize > 0) {
        if (bufferListInOut->mNumberBuffers > 1) {
            float *left = (float *)bufferListInOut->mBuffers[0].mData;
            float *right = (float *)bufferListInOut->mBuffers[1].mData;
                
            //manually interleave channels
            for (int i = 0; i < outputBufferSize; i += 2) {
                context->interleaved[i] = left[i / 2];
                context->interleaved[i + 1] = right[i / 2];
            }
            [PdBase processFloatWithInputBuffer:context->interleaved outputBuffer:context->interleaved ticks:64];
            //de-interleave
            for (int i = 0; i < outputBufferSize; i += 2) {
                left[i / 2] = context->interleaved[i];
                right[i / 2] = context->interleaved[i + 1];
            }
        } else {
            context->interleaved = (float *)bufferListInOut->mBuffers[0].mData;
            [PdBase processFloatWithInputBuffer:context->interleaved outputBuffer:context->interleaved ticks:32];
        }
    }
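
    The interleaved scratch buffer and frameSize are set up before any processing happens. Below is a minimal sketch of how that allocation could look in the tap's prepare callback (assuming stereo; this is a sketch rather than my exact code, and the buffer would be freed again in the matching unprepare callback):

    static void tap_PrepareCallback(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat)
    {
        TapProcessorContext *context = (TapProcessorContext *)MTAudioProcessingTapGetStorage(tap);
        context->frameSize = (UInt32)maxFrames;
        // scratch space for maxFrames of interleaved stereo samples
        context->interleaved = (float *)calloc(maxFrames * 2, sizeof(float));
    }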
    

    posted in libpd / webpd
  • chmod

    Hi there, I'm working on a project that involves streaming audio from an AVPlayer video player object into libpd. For the process loop of the tap, I used PdAudioUnit's render callback code as a guide, but I recently realized that the audio format expected by libpd is not the same as the audio coming from the tap — that is, the tap provides two buffers of non-interleaved audio data in the incoming AudioBufferList, whereas libpd expects interleaved samples. Does anyone know of a way I can work around this?

    I think I need to create a new AudioBufferList or float buffer and interleave the samples into it, but that seems expensive to me. If anyone could give me some pointers I would greatly appreciate it!
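
    For context, the tap is created and attached to the AVPlayerItem through an AVMutableAudioMix, roughly like the sketch below (the asset/playerItem variables and the init/finalize/prepare/unprepare callback names are placeholders, not my exact code):

    MTAudioProcessingTapCallbacks callbacks;
    callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
    callbacks.clientInfo = NULL; // or a pointer to state for the init callback
    callbacks.init = tap_InitCallback; // allocates the TapProcessorContext used below
    callbacks.finalize = tap_FinalizeCallback;
    callbacks.prepare = tap_PrepareCallback;
    callbacks.unprepare = tap_UnprepareCallback;
    callbacks.process = tap_ProcessCallback;

    MTAudioProcessingTapRef tap = NULL;
    OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
    if (err == noErr) {
        AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
        AVMutableAudioMixInputParameters *params = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
        params.audioTapProcessor = tap;
        AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
        audioMix.inputParameters = @[params];
        playerItem.audioMix = audioMix;
        CFRelease(tap); // the audio mix retains the tap
    }

    Here is the process callback I have so far, adapted from PdAudioUnit's render callback: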

    static void tap_ProcessCallback(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
    {
        OSStatus status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut);
        if (noErr != status) {
            NSLog(@"Error: MTAudioProcessingTapGetSourceAudio: %d", (int)status);
            return;
        }
        
        TapProcessorContext *context = (TapProcessorContext *)MTAudioProcessingTapGetStorage(tap);
        
        // (re)create the input and output ring buffers whenever the frame size changes
        if (context->frameSize != numberFrames) {
            NSLog(@"creating ring buffers with size: %ld", (long)numberFrames);
            createRingBuffers((UInt32)numberFrames, context);
        }
        
        //adapted from PdAudioUnit.m
        float *buffer = (float *)bufferListInOut->mBuffers[0].mData;
        
        if (context->inputRingBuffer || context->outputRingBuffer) {
            
            // output buffer info from ioData
            UInt32 outputBufferSize = bufferListInOut->mBuffers[0].mDataByteSize; // * 2 solved faint avplayer issue
            UInt32 outputFrames = (UInt32)numberFrames;
    //        UInt32 outputChannels = bufferListInOut->mBuffers[0].mNumberChannels;
            
            // input buffer info from ioData *after* rendering input samples
            UInt32 inputBufferSize = outputBufferSize;
            UInt32 inputFrames = (UInt32)numberFrames;
            UInt32 framesAvailable = (UInt32)rb_available_to_read(context->inputRingBuffer) / context->inputFrameSize;
                    
            //render input samples
            
            while (inputFrames + framesAvailable < outputFrames) {
                // pad input buffer to make sure we have enough blocks to fill auBuffer,
                // this should hopefully only happen when the audio unit is started
                rb_write_value_to_buffer(context->inputRingBuffer, 0, context->inputBlockSize);
                framesAvailable += context->blockFrames;
            }
            rb_write_to_buffer(context->inputRingBuffer, 1, buffer, inputBufferSize);
            
            // input ring buffer -> context -> output ring buffer
            char *copy = (char *)buffer;
            while (rb_available_to_read(context->outputRingBuffer) < outputBufferSize) {
                rb_read_from_buffer(context->inputRingBuffer, copy, context->inputBlockSize);
                [PdBase processFloatWithInputBuffer:(float *)copy outputBuffer:(float *)copy ticks:1];
                rb_write_to_buffer(context->outputRingBuffer, 1, copy, context->outputBlockSize);
            }
            
            // output ring buffer -> audio unit
            rb_read_from_buffer(context->outputRingBuffer, (char *)buffer, outputBufferSize);
        }
    }
    

    posted in libpd / webpd
  • chmod

    It was brought to my attention that the zipfile in my last post was being flagged by different browsers as a virus — here are the files contained in that zip:

    bellkit_test.pd
    bellkit_test.wav
    bellkit_test2.wav

    posted in technical issues
  • chmod

    Hi there —

    I'm trying to detect pitches from a Bell Kit instrument for a music education app, and so far I'm having trouble picking up MIDI pitches above the 90s/100s, which is the target range for this instrument.

    Here's a test patch I'm using to determine whether [sigmund~] or [helmholtz~] does a better job at reporting higher-pitched content. sigmund~ is generally better at detecting the higher pitches, but it seems to stop picking things up above the MIDI pitch 100 range, which is the range of the second recording in the zip file. Can anyone tell me whether these objects have a limit for higher-frequency detection, or suggest parameter configurations or different methods for tracking this instrument?

    posted in technical issues
  • chmod

    @ablue

    Did you make any progress with this issue? I'm in a similar situation myself: I've been asked to build an automated offline testing system for a pitch-tracking rhythm game. Since timing is a really important aspect of my patch (it tracks not only pitches but rhythms), I'm not sure this kind of system would be feasible in my case. I'm thinking my best bet would be a system that automatically goes through each level one by one using a test MIDI file as input (with random variances to test accuracy). The game currently works with instruments as low as the tuba, so I'm not sure I'd be able to use oversampling in my case, but please let me know if I'm incorrect!

    posted in technical issues
  • chmod

    Actually I found a relatively simple solution — if headphones are not present, I add a hip~ 1500 object to the synth to filter out the fundamental. Since the iPhone speakers are tinny anyway, this does the trick without making the synth sound too faint.

    posted in technical issues
  • chmod

    Hi everyone,

    I've run into an interesting problem with an iPhone rhythm game project I'm working on:

    Basically the game has a scrolling score of music that you can play with your instrument at the same time. My patch detects pitches in real-time and then marks notes on the screen as correct or not.

    The problem is that there is a built-in guide synth that plays each note as it scrolls (this is toggleable, however). Right now the guide synth itself is being picked up by the microphone and scoring notes as correct without the user providing any input.

    Obviously this issue goes away once the user wears headphones, but when audio is playing on the phone's speaker at anywhere above 50% volume, notes will get triggered automatically no matter what. I have tried several techniques (a noise gate, some filters, minpower settings on helmholtz~ and sigmund~) that reduce the problem but don't eliminate it.

    That said, I'm not hoping to eliminate this problem 100% (it's most likely not possible), but I was wondering if anyone had suggestions for techniques that could differentiate a real-life instrument from the built-in synth playing on the iPhone speaker, when both are playing the same note.

    Thank you :)

    posted in technical issues
  • chmod

    @weightless Hi again —

    I was trying to look at the envelope of the signal using env~ and subtracting a delayed copy of that output from itself to find out where the envelope rises and falls. This works pretty well for most of the instrument samples I was working with, but I ran into a problem with a flute recording that has a pretty heavy vibrato: each "vibration" of the vibrato was detected as its own attack, since that's where the waveform peaks as well.
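
    In C terms, the idea was roughly the sketch below (an illustration of the logic only; the history length and threshold are made-up numbers, and the actual patch does this with env~ and a delayed copy of its output):

    #define ENV_HISTORY 4           // how many analysis blocks back to compare against
    #define ATTACK_THRESHOLD 6.0f   // rise in dB treated as a new attack

    static float env_history[ENV_HISTORY];
    static int env_index = 0;

    // envelope_db is what env~ reports for the current block
    static int detect_attack(float envelope_db)
    {
        float delayed = env_history[env_index];    // envelope from ENV_HISTORY blocks ago
        env_history[env_index] = envelope_db;      // store the newest value
        env_index = (env_index + 1) % ENV_HISTORY;
        float rise = envelope_db - delayed;        // positive means the envelope is rising
        return rise > ATTACK_THRESHOLD;            // heavy vibrato also trips this, which is the problem
    }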

    Your suggestion seems like it could work — I have all of my audio files in an Ableton project right now; is it alright if I send it to you that way (with all the samples in the project folder, that is)?

    posted in technical issues
  • chmod

    @weightless

    I tried using that method today and found that if the notes are "tongued" and there is still a continuous pitch between every articulation, the -1500 does not appear where it's needed and the individual attacks are not picked up. This was especially apparent when I tested using my voice (saying "laaalaaalaaalaaa", for example): there is a continuous pitch being picked up by sigmund~, but I am articulating the note four times.

    I've been trying to look at sudden jumps in the envelope instead, but I need to find something that works "one size fits all" for different kinds of instruments and different volume levels as well, so it's been pretty tricky.

    posted in technical issues
  • chmod

    @weightless that's a great idea; I hadn't thought of the fact that sigmund~ outputs -1500 during silences in pitch mode.
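
    In C terms the logic boils down to something like this sketch (illustration only, not the actual patch):

    #define SIGMUND_SILENCE -1500.0f

    static float previous_pitch = SIGMUND_SILENCE;

    // called with each value coming from sigmund~'s pitch outlet
    static int is_new_note(float pitch)
    {
        // a jump from the silence value to a real pitch marks a note onset
        int onset = (previous_pitch == SIGMUND_SILENCE && pitch != SIGMUND_SILENCE);
        previous_pitch = pitch;
        return onset;
    }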

    I knew I had to use a combination of objects in some way. Thanks a lot!

    posted in technical issues
