objective-c – How to configure the frame size with AudioUnit.framework on iOS

I have an audio application in which I need to capture microphone samples and encode them to MP3 with ffmpeg.

First I configure the audio:

/**  
     * We need to specify the format we want to work with.
     * We use Linear PCM because it is uncompressed and we work on raw data.
     * 
     * We want 16 bits, 2 bytes (one SInt16) per packet/frame, at 8 kHz.
     */
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = SAMPLE_RATE;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = audioFormat.mChannelsPerFrame*sizeof(SInt16)*8;
    audioFormat.mBytesPerPacket     = audioFormat.mChannelsPerFrame*sizeof(SInt16);
    audioFormat.mBytesPerFrame      = audioFormat.mChannelsPerFrame*sizeof(SInt16);

The recording callback is:

static OSStatus recordingCallback(void *inRefCon, 
                                  AudioUnitRenderActionFlags *ioActionFlags, 
                                  const AudioTimeStamp *inTimeStamp, 
                                  UInt32 inBusNumber, 
                                  UInt32 inNumberFrames, 
                                  AudioBufferList *ioData) 
{
    NSLog(@"Log record: %u", (unsigned int)inBusNumber);
    NSLog(@"Log record: %u", (unsigned int)inNumberFrames);
    // inTimeStamp is a pointer; casting it to UInt32 would log the
    // (constant) address, not the time. Log the sample time instead:
    NSLog(@"Log record: %f", inTimeStamp->mSampleTime);

    // the data gets rendered here
    AudioBuffer buffer;

    // a variable where we check the status
    OSStatus status;

    /**
     This is the reference to the object who owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

    /**
     on this point we define the number of channels, which is mono
     for the iPhone. the number of frames is usually 512 or 1024.
     */
    buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // total byte size of the buffer
    buffer.mNumberChannels = 1; // one channel

    buffer.mData = malloc( inNumberFrames * sizeof(SInt16) ); // buffer size

    // we put our buffer into a bufferlist array for rendering
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // render input and check for error
    status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    [audioProcessor hasError:status:__FILE__:__LINE__];

    // process the bufferlist in the audio processor
    [audioProcessor processBuffer:&bufferList];

    // clean up the buffer
    free(bufferList.mBuffers[0].mData);


    //NSLog(@"RECORD");
    return noErr;
}

The values I get are:

inBusNumber = 1

inNumberFrames = 1024

inTimeStamp = 80444304 // always the same, which is strange (note the code above casts the pointer itself to UInt32, so this is an address, not a timestamp)

However, the frame size I need for MP3 encoding is 1152. How can I configure that?

If I buffer, that means latency, which I want to avoid because this is a real-time application. With this configuration, each buffer ends up with garbage trailing samples: 1152 − 1024 = 128 bad samples. All samples are SInt16.

Best answer: You can configure the number of frames per slice the AudioUnit uses with the kAudioUnitProperty_MaximumFramesPerSlice property. However, I think the best solution in your case is to buffer the incoming audio into a ring buffer, then signal the encoder whenever audio is available. Since you are transcoding to MP3, I'm not sure what real-time means in this case anyway.
