I'm trying to play audio through from the microphone to the headphones on OS X with non-interleaved data (Novocaine prints "Not interleaved!"). All of the examples in the project are for interleaved data, so I wasn't quite sure what to do.
Additionally, the output block receives 236 (sometimes 235) frames per callback while the input block receives 512, which could also be causing problems -- I'm not sure.
With this code, audio plays through from the microphone, but it sounds lower-pitched and garbled (I can't make out words):
self.audioManager = [Novocaine audioManager];
self.ringBuffer = new RingBuffer(32768, 2);

__weak AppDelegate *wself = self;

[self.audioManager setInputBlock:^(float *newAudio,
                                   UInt32 numFrames,    // 512
                                   UInt32 numChannels)  // 2
{
    wself.ringBuffer->AddNewInterleavedFloatData(newAudio, numFrames, numChannels);
}];

[self.audioManager setOutputBlock:^(float *audioToPlay,
                                    UInt32 numFrames,    // 236 (sometimes 235)
                                    UInt32 numChannels)  // 2
{
    wself.ringBuffer->FetchInterleavedData(audioToPlay, numFrames, numChannels);
}];

[self.audioManager play];
How can I fix my code? RingBuffer's FetchData(float *outData, SInt64 numFrames, SInt64 whichChannel, SInt64 stride) looks appropriate, but I'm not sure what to pass for the stride parameter.
This data about the audioManager may be helpful:
NSLog(@"Is interleaved = %i",audioManager.isInterleaved); // 0
NSLog(@"Input available = %i",audioManager.inputAvailable); // 1
NSLog(@"Number of Input channels = %i",audioManager.numInputChannels); // 2
NSLog(@"Number of Output channels = %i",audioManager.numOutputChannels); // 2
NSLog(@"Sampling rate = %f",audioManager.samplingRate); // 96000.000000
NSLog(@"Bytes per sample = %i",audioManager.numBytesPerSample); // 4
AudioStreamBasicDescription inputFormat = audioManager.inputFormat;
/*
(AudioStreamBasicDescription) $0 = {
(Float64) mSampleRate = 96000
(UInt32) mFormatID = 1819304813
(UInt32) mFormatFlags = 9
(UInt32) mBytesPerPacket = 8
(UInt32) mFramesPerPacket = 1
(UInt32) mBytesPerFrame = 8
(UInt32) mChannelsPerFrame = 2
(UInt32) mBitsPerChannel = 32
(UInt32) mReserved = 0
}
*/
AudioStreamBasicDescription outputFormat = audioManager.outputFormat;
/*
(AudioStreamBasicDescription) $1 = {
(Float64) mSampleRate = 96000
(UInt32) mFormatID = 1819304813
(UInt32) mFormatFlags = 41
(UInt32) mBytesPerPacket = 4
(UInt32) mFramesPerPacket = 1
(UInt32) mBytesPerFrame = 4
(UInt32) mChannelsPerFrame = 2
(UInt32) mBitsPerChannel = 32
(UInt32) mReserved = 0
}
*/
When I use the simple-delay code in the Mac AppDelegate.mm sample, words are intelligible but the pitch is very low (the following code is copied from the sample project unchanged):
// A simple delay that's hard to express without ring buffers
// ========================================
[self.audioManager setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    wself.ringBuffer->AddNewInterleavedFloatData(data, numFrames, numChannels);
}];

int echoDelay = 11025;
float *holdingBuffer = (float *)calloc(16384, sizeof(float));
[self.audioManager setOutputBlock:^(float *outData, UInt32 numFrames, UInt32 numChannels) {
    // Grab the play-through audio
    wself.ringBuffer->FetchInterleavedData(outData, numFrames, numChannels);
    float volume = 0.8;
    vDSP_vsmul(outData, 1, &volume, outData, 1, numFrames * numChannels);

    // Seek back, and grab some delayed audio
    wself.ringBuffer->SeekReadHeadPosition(-echoDelay - numFrames);
    wself.ringBuffer->FetchInterleavedData(holdingBuffer, numFrames, numChannels);
    wself.ringBuffer->SeekReadHeadPosition(echoDelay);
    volume = 0.5;
    vDSP_vsmul(holdingBuffer, 1, &volume, holdingBuffer, 1, numFrames * numChannels);
    vDSP_vadd(holdingBuffer, 1, outData, 1, outData, 1, numFrames * numChannels);
}];
Not having worked with audio before, I was blown away by how simple Novocaine's interface is compared to the code I was writing while following Apple's Audio Queue Services and AVFoundation tutorials. Great work!