novocaine's Introduction

An analgesic for high-performance audio on iOS and OSX.

Getting really fast audio on iOS and Mac OS X with raw Audio Units is hard, and will leave you scarred and bloody. With Novocaine, what used to take days can be done with just a few lines of code.

Getting Audio

Novocaine *audioManager = [Novocaine audioManager];
[audioManager setInputBlock:^(float *newAudio, UInt32 numSamples, UInt32 numChannels) {
	// Now you're getting audio from the microphone every 20 milliseconds or so. How's that for easy?
	// Audio comes in interleaved, so,
	// if numChannels = 2, newAudio[0] is channel 1, newAudio[1] is channel 2, newAudio[2] is channel 1, etc.
}];
[audioManager play];

Playing Audio

Novocaine *audioManager = [Novocaine audioManager];
[audioManager setOutputBlock:^(float *audioToPlay, UInt32 numSamples, UInt32 numChannels) {
	// All you have to do is put your audio into "audioToPlay".
}];
[audioManager play];

Does anybody actually use it?

Yep. Novocaine is the result of three years of work on the audio engine of Octave, Fourier and oScope, a powerful suite of audio analysis apps. Please do check them out!

A thing to note:

The RingBuffer class is written in C++ to make things extra zippy, so any class that uses it has to be Objective-C++. Rename each file that uses RingBuffer from MyClass.m to MyClass.mm.

Want some examples?

Inside of ViewController.mm are a bunch of tiny little examples I wrote. Uncomment one and see how it sounds.
Do note, however, for examples involving play-through, that you should be using headphones. Having the
mic and speaker close to each other will produce some gnarly feedback.

Want to learn the nitty-gritty of Core Audio?

If you want to get down and dirty, if you want to get brave and get close to the hardware, I can only point you to the places where I learned how to do this stuff. Chris Adamson and Michael Tyson are two giants in the field of iOS audio, and they each wrote indispensable blog posts (this is Chris's, this is Michael's). Also, Chris Adamson now has a whole gosh-darned BOOK on Core Audio. I would have done unspeakable things to get my hands on this when I was first starting.

novocaine's People

Contributors

alexbw, andrewsardone, casbreuk, coryalder, danielmj, demonnico, jaden-young, jocull, jonasgessner, mike-es, ndonald2, omygaudio, pinxue, porgery, rc1, rweichler


novocaine's Issues

Project site title

A minor issue, but the GitHub Pages project you put together for this still has what I assume is the default Twitter Bootstrap title: "Bootstrap, from Twitter"

I know it makes putting sites together easier, but details like these help with the credibility of an open source project.

Cannot stop playback

Hello,

is there a way to stop currently playing audio?
I've tried this

[audioManager pause];

... and this

[fileReader stop];

Neither is a good solution. The first stops playback fine, but then I can't start it again.
The second doesn't really stop playback; it just stops the file reader.

I've also tried to put stop handler into setOutputBlock

[self.audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    if (stopPlaying) {
        return;
    }
    ...
}];

Is there any good solution to stop/play/resume audio playback?

Multiple runs of file write example gives error -48 duplicate filename on second execution and all following

Hi

When I uncomment the filewrite example code in the demo view controller, on the first run everything is fine, but on the second run,

CheckError(ExtAudioFileCreateWithURL(audioFileRef, kAudioFileM4AType, &outputFileDesc, NULL, 0, &_outputFile), "Creating file");

exits the app with error code -48, which is declared in MacErrors.h as duplicate filename.

I will try to find a workaround for this and post it here.

best

What would be the right way?

The guide says, "All you have to do is put your audio into 'audioToPlay'." What would be the appropriate way to do that? I have an NSData with the audio.

Thank you very much!

Transition iOS code to use AVAudioSession instead of AudioSession

As of iOS7, the pure-C AudioSession API is deprecated. Not a high priority here as a lot of the corresponding AVAudioSession capabilities were only rolled out with iOS 5 or 6, but definitely an issue to have on record if Novocaine is going to have a long lifetime.

Fix output interleaving/summing for iOS devices playing through mono speaker

There is a bug in the output callback for iOS devices playing through the integrated speaker. The number of available channels does not match the number of buffers provided by the system to the render callback (the audio session reports 1 output channel, but 2 buffers are provided to the callback).

In that case, Novocaine's sample buffer doesn't get indexed correctly, which results in clicks in the output. I have a fix brewing on this branch, but I want to make sure it's actually well-thought-out and not just an incidental fix.

AudioFileWriter not working

Hi,
I'm trying to use Novocaine to record some audio from the mic and write it to an m4a file.

The file is created, but it's not readable. The contents of the file look like:

0000 00 00 00 1C 66 74 79 70 4D 34 41 20 00 00 00 00 ....ftypM4A ....
0010 4D 34 41 20 6D 70 34 32 69 73 6F 6D 00 00 00 00 M4A mp42isom....
0020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0060 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0080 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0090 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00A0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00B0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00C0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00D0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00E0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00F0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0100 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0110 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0120 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0130 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0140 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0150 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0160 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................

There are a lot of NULL bytes, and the file doesn't play in any player.

My first thought was that the microphone was not working, but I copy/pasted your example (the one that shows the decibel levels) and got a lot of values (from -50 to -10). I assume that is what we expect, so the microphone is working.

Then I ran your sample project itself, uncommented the code under // AUDIO FILE WRITING YEAH!, and got the same problem: MyRecording.m4a is created but doesn't play in any player.

// init
self.ringBuffer = new RingBuffer(32768, 2);
self.audioManager = [Novocaine audioManager];
[self.audioManager play];

...
self.fileWriter = [[AudioFileWriter alloc]
                       initWithAudioFileURL:outputFileURL
                       samplingRate:self.audioManager.samplingRate
                       numChannels:self.audioManager.numInputChannels];
...

__weak XViewController * wself = self;
self.audioManager.inputBlock = ^(float *data, UInt32 numFrames, UInt32 numChannels) {
    // if I try to show the data here, I have values...
    [wself.fileWriter writeNewAudio:data numFrames:numFrames numChannels:numChannels];
};

I'm using Xcode 5 DP5, an iPhone 5 on iOS 7 beta 5, and an iPad mini on iOS 6.1.3. The problem is the same on each device.

Oh, yes, and the log when I initialize the audio is:

AudioRoute: Speaker
Input available? 1
AudioRoute: ReceiverAndMicrophone
Input available? 1
We've got 1 input channels
We've got 1 output channels
Current sampling rate: 44100.000000

If you can help, it would be very appreciated :)

Move away from vDSP_vsadd for copies

I used to think vDSP_vsadd was faster for vector copies. It is for big, big arrays, but not even by that much. We never copy blocks of audio big enough for it to matter, and it makes the code harder to read in the first place:

Test code:
int numIterations = 10000;

for (int powerNumPoints = 0; powerNumPoints < 20; ++powerNumPoints) {
    int numPoints = powf(2,powerNumPoints);
    float *sourceArray = (float *)calloc(numPoints, sizeof(float));
    float *destArray = (float *)calloc(numPoints, sizeof(float));

    NSDate *start_memcpy = [NSDate date];
    for (int i=0; i < numIterations; ++i) {
        memcpy(destArray, sourceArray, numPoints*sizeof(float));
    }
    NSTimeInterval timeInterval = -[start_memcpy timeIntervalSinceNow];
    timeInterval = (timeInterval*1.0e9) / (float)numIterations;
    NSLog(@"Memcpy:  %f %d", timeInterval, numPoints);

    NSDate *start_vdsp = [NSDate date];
    for (int i=0; i < numIterations; ++i) {
        vDSP_mmov(sourceArray, destArray, numPoints, 1, numPoints, numPoints);
    }
    timeInterval = -[start_vdsp timeIntervalSinceNow];
    timeInterval = (timeInterval*1.0e9) / (float)numIterations;

    NSLog(@"vDSP: %f %d", timeInterval, numPoints);

    free(sourceArray);
    free(destArray);
}

Results on an iPhone 4S:

Memcpy: 94.097853 1
vDSP: 105.702877 1
Memcpy: 88.298321 2
vDSP: 139.904022 2
Memcpy: 122.201443 4
vDSP: 127.899647 4
Memcpy: 122.100115 8
vDSP: 143.897533 8
Memcpy: 95.701218 16
vDSP: 107.103586 16
Memcpy: 171.601772 32
vDSP: 183.397532 32
Memcpy: 209.200382 64
vDSP: 222.498178 64
Memcpy: 270.998478 128
vDSP: 285.500288 128
Memcpy: 468.504429 256
vDSP: 516.998768 256
Memcpy: 862.401724 512
vDSP: 860.202312 512
Memcpy: 1583.600044 1024
vDSP: 1588.600874 1024
Memcpy: 3027.701378 2048
vDSP: 3074.598312 2048
Memcpy: 5952.799320 4096
vDSP: 5932.098627 4096
Memcpy: 15380.203724 8192
vDSP: 15314.501524 8192
Memcpy: 32515.496016 16384
vDSP: 32122.302055 16384
Memcpy: 63448.399305 32768
vDSP: 63298.600912 32768
Memcpy: 130549.299717 65536
vDSP: 125711.697340 65536
Memcpy: 357867.598534 131072
vDSP: 356899.499893 131072
Memcpy: 1716682.499647 262144
vDSP: 1716736.400127 262144
Memcpy: 3664638.602734 524288
vDSP: 3660620.099306 524288

How to change sample rate

Hi, could you explain how to change the current sample rate?

These lines

inputFormat.mSampleRate = 44100.0;
outputFormat.mSampleRate = 44100.0;

don't seem to have any effect.

Thanks

How to set kAudioSessionMode_Measurement

Hello,

Is it/would it be possible to set kAudioSessionMode_Measurement? I tried adding the following, but it did not work.

// try to set the session mode
UInt32 sessionMode = kAudioSessionMode_Measurement;
CheckError( AudioSessionSetProperty (kAudioSessionMode_Measurement,
                                     sizeof (kAudioSessionMode_Measurement),
                                     &sessionMode), "Could not set mode");

Thanks,
Ross

Add CheckError to setting of kExtAudioFileProperty_ClientDataFormat in AudioFileWriter

When I get a minute and some git skills I can take care of this, but is there any reason you're not running that ExtAudioFileSetProperty call through CheckError? It looks like that's where my recording is currently failing. I've been mucking with AudioFileWriter.m, so it's possible I caused the error, but wrapping ExtAudioFileSetProperty would at least catch it.

Garbled iPhone5 playback

I'm recording MP3s with mono input and mono output on an iPhone 4 and iPhone 4S, and they play back fine on those devices. They also play fine on my Mac.

However, on the iPhone 5 they're low-pitched and garbled. I haven't tried them on an iPad.

Any ideas? I'll be poking around and I'll let you know if I figure it out.

Thanks!

ARC Compatibility

Novocaine currently doesn't work out of the box with Automatic Reference Counting (ARC) enabled. This is something that it definitely should support in the future.

Audio on iPhone defaults to playing from reciever

Probable cause:
kAudioSessionCategory_PlayAndRecord
Apple designed this deliberately in order to avoid feedback.

Possible solution:
intelligent setting of the audio category based on whether the input or output block is nil.

how does the output format work?

Hi everyone,

I'm pretty new to Xcode programming and have been assigned to work on an audio project. I was just wondering: how does the output format (linear PCM) relate to an MP3 audio file? And is there any way I can verify that the audio is actually converted to PCM data?

Clicking in Playback

I'm hearing a lot of clicking on playback. (I uncommented and corrected the noise code you've got in the example, and also tried firing off nice sine waves.) I haven't been able to check whether there are also problems with recording.

Recordings with external mic are low pitched and garbled

If I record via internal microphone, life is good.
If I plug in an external mic and record, the audio is completely fubar (low pitched and garbled).
If I unplug the external mic and record again, the recorded audio is fine.

Likewise, if I quit the app (double-tap on home button, quit the app in the popup bar at the bottom of the screen), then plug in the mic, then boot the app, I still get the same results as above.

Anyone have this issue and find a way to fix it?

I wondered if it was some weird hardware issue, so I used the same mic with a 3rd-party app, and it worked just fine there. That would mean it's not the hardware (either the mic or the iPod 4)...

Unable to play a .wav file using the sample...

Hi Alex, would you know why .wav files won't play in this example? Which parameters do I need to change/tweak to make it play? The wav file is sample rate: 11025, bits per sample: 8, channels: 1.

AudioSession

I think many users (including me) will need to be able to configure blocks for handling AudioSession events (and provide explicit controls for setting up the AudioSession).

Adding in Fancier Audio Units

Currently, novocaine assumes that there's only an input audio unit and an output audio unit. What's neat (and sometimes necessary) is to add in other cool audio units, like samplers or reverb effects and the like.

Thing is: I have no idea how to structure this in an elegant way. Novocaine has to stay dead-simple to use, but this is a feature that would be quite beneficial to many, if done right.

All thoughts and opinions welcome.

How to change input and output audio settings ??

I am working on an iOS audio-processing app where I would like to use the Novocaine project, but I need to change the default input and output audio settings. The defaults are (maybe; I am not sure):

inputFormat.mSampleRate = 44100.0;
outputFormat.mSampleRate = 44100.0;
mBitsPerChannel = 32;
mBytesPerSample = 4
mNumberOfChannels = 2

But i need to change the input and output audio settings to the following...

inputAudioStream.mSampleRate = 8000.0;
inputAudioStream.mFormatID = kAudioFormatLinearPCM;
inputAudioStream.mBitsPerChannel = 16;
inputAudioStream.mBytesPerFrame = 2;
inputAudioStream.mChannelsPerFrame = 1;
inputAudioStream.mFramesPerPacket = 1;
inputAudioStream.mReserved = 0;

outputAudioStream.mSampleRate = 8000.0;
outputAudioStream.mFormatID = kAudioFormatLinearPCM;
outputAudioStream.mBitsPerChannel = 16;
outputAudioStream.mBytesPerFrame = 2;
outputAudioStream.mChannelsPerFrame = 1;
outputAudioStream.mFramesPerPacket = 1;
outputAudioStream.mReserved = 0;

Can anyone please suggest where and how I can do this? Any sample code would be very helpful. I have seen issue #39, but I haven't tried that yet.

Thanks in advance for your help and time.

Switch parameter access to THIS->_paramName

Michael Tyson (of Tasty Pixel fame) brought to my attention that you can access ObjC instance variables through simple pointer dereferences. This could give us a few microseconds of extra processing time inside the real-time loop.

AudioFileReader cannot handle changes in number of channels

If the number of output channels changes during playback using AudioFileReader, and the new channel count is larger than the initial one, the sample app will crash.

The cause is probably that Novocaine, AudioFileReader, and its RingBuffer each keep their own copy of the output channel count, and these become inconsistent when the number of channels changes.

Memory access violation if all I do is: "mNVAudioManager = [Novocaine audioManager]"

The only thing I do with Novocaine in my current iOS project is:

mNVAudioManager = [Novocaine audioManager];

I have commented out all other code that would try to use the audio manager / novocaine functions.

It crashes with a memory access violation in novocaine::inputCallback(), on the second line:

if( !sm.playing ) <-- memory access violation

I am using ARC, with ARC disabled for all Novocaine files, .mm file extensions for files using Novocaine, and the compiler set to Objective-C++.
No compiler errors or warnings.

How do I playthrough audio on OS X with non-interleaved data?

I'm trying to play through audio on OS X from the microphone to headphones with non-interleaved data ("Not interleaved!" is printed by Novocaine). All of the examples in the project are for interleaved data, so I wasn't quite sure what to do.

Additionally, there are 236 (sometimes 235) output frames per callback, whereas there are 512 input frames, which could also cause problems; I'm not sure.

With this code, audio plays through from the microphone, but it has a lower pitch and sounds garbled (I can't make out words):

self.audioManager = [Novocaine audioManager];
self.ringBuffer = new RingBuffer(32768, 2);
__weak AppDelegate * wself = self;
Novocaine *audioManager = [Novocaine audioManager];

[audioManager setInputBlock:^(float *newAudio,
                              UInt32 numFrames, // 512
                              UInt32 numChannels) // 2
 {
     wself.ringBuffer->AddNewInterleavedFloatData(newAudio, numFrames, numChannels);
 }];

[audioManager setOutputBlock:^(float *audioToPlay,
                               UInt32 numFrames, // 236 (sometimes 235)
                               UInt32 numChannels) // 2
 {
     wself.ringBuffer->FetchInterleavedData(audioToPlay, numFrames, numChannels);
 }];

[self.audioManager play];

How can I fix my code? RingBuffer's FetchData(float *outData, SInt64 numFrames, SInt64 whichChannel, SInt64 stride) looks appropriate, but I'm not sure what to pass for the stride parameter.

This data about the audioManager may be helpful:

NSLog(@"Is interleaved = %i",audioManager.isInterleaved); // 0
NSLog(@"Input available = %i",audioManager.inputAvailable); // 1
NSLog(@"Number of Input channels = %i",audioManager.numInputChannels); // 2
NSLog(@"Number of Output channels = %i",audioManager.numOutputChannels); // 2
NSLog(@"Sampling rate = %f",audioManager.samplingRate); // 96000.000000
NSLog(@"Bytes per sample = %i",audioManager.numBytesPerSample); // 4
AudioStreamBasicDescription inputFormat = audioManager.inputFormat;
/*
 (AudioStreamBasicDescription) $0 = {
 (Float64) mSampleRate = 96000
 (UInt32) mFormatID = 1819304813
 (UInt32) mFormatFlags = 9
 (UInt32) mBytesPerPacket = 8
 (UInt32) mFramesPerPacket = 1
 (UInt32) mBytesPerFrame = 8
 (UInt32) mChannelsPerFrame = 2
 (UInt32) mBitsPerChannel = 32
 (UInt32) mReserved = 0
 }
 */
AudioStreamBasicDescription outputFormat = audioManager.outputFormat;
/*
 (AudioStreamBasicDescription) $1 = {
 (Float64) mSampleRate = 96000
 (UInt32) mFormatID = 1819304813
 (UInt32) mFormatFlags = 41
 (UInt32) mBytesPerPacket = 4
 (UInt32) mFramesPerPacket = 1
 (UInt32) mBytesPerFrame = 4
 (UInt32) mChannelsPerFrame = 2
 (UInt32) mBitsPerChannel = 32
 (UInt32) mReserved = 0
 }
 */

When I use the code in Mac AppDelegate.mm for a simple delay, words are intelligible but the pitch is very low (the following code is copied from the sample project unchanged):

// A simple delay that's hard to express without ring buffers
// ========================================

[self.audioManager setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    wself.ringBuffer->AddNewInterleavedFloatData(data, numFrames, numChannels);
}];

int echoDelay = 11025;
float *holdingBuffer = (float *)calloc(16384, sizeof(float));
[self.audioManager setOutputBlock:^(float *outData, UInt32 numFrames, UInt32 numChannels) {

    // Grab the play-through audio
    wself.ringBuffer->FetchInterleavedData(outData, numFrames, numChannels);
    float volume = 0.8;
    vDSP_vsmul(outData, 1, &volume, outData, 1, numFrames*numChannels);


    // Seek back, and grab some delayed audio
    wself.ringBuffer->SeekReadHeadPosition(-echoDelay-numFrames);
    wself.ringBuffer->FetchInterleavedData(holdingBuffer, numFrames, numChannels);
    wself.ringBuffer->SeekReadHeadPosition(echoDelay);

    volume = 0.5;
    vDSP_vsmul(holdingBuffer, 1, &volume, holdingBuffer, 1, numFrames*numChannels);
    vDSP_vadd(holdingBuffer, 1, outData, 1, outData, 1, numFrames*numChannels);
}];

Not having worked with audio before, I was blown away by how simple Novocaine's interface is compared to the code I was writing while following Apple's Audio Queue Services and AVFoundation tutorials. Great work!

Garbled input sound on iPad 1st gen and iOS 4.3.5

The input sound from the standard mic or standard earphone mic is garbled.

To repro:
Open the Novocaine iOS Example project and edit the code so it runs the "Basic playthru example" code that just takes mic input and plays it back out.

Run the code on an iPad 1st gen with iOS 4.3.5.

I am not sure whether the issue is with iOS 4.x or with the iPad 1st gen, as Novocaine does not run in the simulator.

inputBlock is not working....

Hi alex

When I try the demo project on iOS/Simulator, all the cases that use the inputBlock are broken (Basic playthru example, MEASURE SOME DECIBELS!, etc.).

For example, the log below never shows up:

// Basic playthru example
[self.audioManager setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {

    NSLog(@"inputBlock!");

    float volume = 0.5;
    vDSP_vsmul(data, 1, &volume, data, 1, numFrames*numChannels);
    wself.ringBuffer->AddNewInterleavedFloatData(data, numFrames, numChannels);
}];


[self.audioManager setOutputBlock:^(float *outData, UInt32 numFrames, UInt32 numChannels) {
    wself.ringBuffer->FetchInterleavedData(outData, numFrames, numChannels);
}];

I traced into inputCallback in Novocaine.m, and the callback is never hit ("input callback" never shows up either).

// code here
#pragma mark - Render Methods
OSStatus inputCallback   (void                      *inRefCon,
                      AudioUnitRenderActionFlags    * ioActionFlags,
                      const AudioTimeStamp      * inTimeStamp,
                      UInt32                        inOutputBusNumber,
                      UInt32                        inNumberFrames,
                      AudioBufferList           * ioData)
{
@autoreleasepool {

    NSLog(@"input callback!!!");


    Novocaine *sm = (__bridge Novocaine *)inRefCon;

    if (!sm.playing)
        return noErr;
    if (sm.inputBlock == nil)
        return noErr;    

I tested the code on Xcode 4.6.2 and the Xcode 5 preview, with iOS 6.1, iOS 7 beta 2, and the Simulator, and got the same result every time. I'm so confused... does anybody else have the same problem? :-(

Crash when launching from background while audio playing

Hi there,
First of all, let me thank you for open-sourcing this. It looks like a very valuable tool, coming from somebody who has suffered through Core Audio coding.

I've been checking out your library to see how robust it is with regard to backgrounding, interruptions, etc.

I've found a crashing bug that initially makes me think it isn't that robust but maybe you can comment on it.

Testing on iPad 3 with iOS 5.1.
A simple app that starts recording with:

audioManager = [Novocaine audioManager];
// This starts an input block for recording. Set audioManager.inputBlock = nil to stop.
audioManager.inputBlock = ^(float *data, UInt32 numFrames, UInt32 numChannels) {
    NSLog(@"stillgoing:%d", (int)numFrames);
};

I tried going in and out of background and that appears to work fine (for short term at least).

But if I send the app to background, then start playing audio from the stock Music app, then go back to the app running Novocaine I get this error in the console:
Error: Checking number of input channels ('!cat')

iPhone 5 returns 3 input channels

On the iPhone 5 (and probably the iPad mini), this code returns 3 input channels:

// Check the number of input channels.
UInt32 size = sizeof(self.numInputChannels);
UInt32 newNumChannels;
CheckError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                                   &size,
                                   &newNumChannels), "Checking number of input channels");
self.numInputChannels = newNumChannels;

And the app crashes.

On the fly channel mapping for readers and writers

Novocaine should be able to automatically change the channel count of audio coming from or going to a file, using channel maps.

Did this at one point, a long time ago. It looked roughly like this:

// If there's only single-channel input, set up a channel map to record to both channels
if (self.numChannels == 1) {
    // Get the underlying AudioConverterRef
    UInt32 size = sizeof(AudioConverterRef);
    AudioConverterRef conv = NULL;
    XThrowIfError( ExtAudioFileGetProperty(outputFileRef, kExtAudioFileProperty_AudioConverter, &size, &conv), "Could not get underlying converter for ExtAudioFile");
    if (conv)
    {
        // This should be as large as the number of output channels,
        // each element specifies which input channel's data is routed to that output channel
        SInt32 channelMap[] = { 0, 0 };
        XThrowIfError( AudioConverterSetProperty(conv, kAudioConverterChannelMap, 2*sizeof(SInt32), channelMap), "Could not set up mono->stereo channel map");
    }
}

Warnings and Link Errors

When I try to incorporate Novocaine into my own project, I get a lot of tricky warnings and link errors. It's kind of neat how you've integrated C++ and Obj-C++ into a minimal number of source files, but I wonder if you're confusing the compiler/linker.

  1. I get warnings surrounding the blocks that I frankly do not understand ("Capturing 'self' ...").

  2. Probably related: I get warnings about these declarations:

@property (nonatomic, retain) OutputBlock outputBlock;
@property (nonatomic, retain) InputBlock inputBlock;

It wants me to use copy instead of retain.

I assume that my project settings are wrong in some subtle way (I've tried to match them to yours), but I want to understand what's going on, since I need to integrate this stuff into a much larger project.

adding Novocaine to my project

Hi,
First off I would like to say great work and thank you for contributing this open source project, it is much appreciated by many of us.

I'm having a little trouble adding this to my iOS project, and actually I'm not quite sure this is the best solution for me, so I thought I would ask you, since you seem to know a good bit about audio.

I am designing an application for kids that lets them play along with certain songs/stories. I don't need to do any complicated audio processing; all the sounds are going to be pre-recorded samples in specific keys. The only manipulation I'll need is playing up to 6 different audio clips at the same time, with the ability to adjust the level of each track.

I tried adding your code to my project, but I am having trouble with dependencies... and the whole .mm thing. If you could include a short guide on the easiest way to add this to an existing project, it would be greatly appreciated.

Option to make play in mono

This might sound silly, but I would find it useful to make the sound (music) play back in mono. Can this be done in the setOutputBlock?

Spamming Start/Stop recording causes memory exception (fix shown)

If I rapidly start and stop recording, I very quickly get a memory exception from ExtAudioFileWriteAsync().
The error occurs in the assembly for AudioRingBuffer::GetTimeBounds().

I applied the instructions found here:
http://stackoverflow.com/questions/7961087/exc-bad-access-in-audioringbuffergettimebounds

(wrapping a mutex around each call to ExtAudioFileWriteAsync and ExtAudioFileDispose)

Now I can spam away without it crashing.

Audio Reader NSURL Refresh

Hey guys, I have one audio player instance, and I need to change its input URL once it has finished playing a song.

My code gets the iPodMusicPlayer instance from the MPMusicPlayerController class in MediaPlayer.framework, gets the now-playing item's NSURL, and runs it through TSLibraryImport (https://github.com/tapsquare/TSLibraryImport) so I can use it within my app. TSLibraryImport then provides another NSURL, which I feed to Novocaine's AudioFileReader class. The reader works fine when I run my "getSongData" method once, but I need to run that method every time the NSURL changes. I have code to do this, but when I run the method the audio reader is playing and yet I hear no sound. I'm thinking this could have something to do with data cached when the reader was first initialized, but I'm not sure. Basically, I just need to be able to call "getSongData" multiple times, as if I were making a new player with a different loop each time.

Here's my code (I won't include the header, as I'm sure you can work out what's global, etc.):

https://ghostbin.com/paste/jj6f8

I would appreciate some guidance, as I'm sure it's a relatively simple problem to some.

Note: I know the reader is playing, and that I am getting a new NSURL each time, thanks to logging.

AudioFileReader duration not working

hi Alex,

Trying to get the duration value from AudioFileReader wasn't working. I fixed it by amending the .h file as below and adding an empty setDuration: method.

@property (getter=getDuration, setter=setDuration:) float duration;

Also, using your example code, I was recording and saving audio to disk, then playing it back. On playback, fileReader.currentTime incremented from 0.0 up to 4.000000 seconds and then froze (even though the sound continued to loop). fileReader.duration printed out as 4.655601. To stop the sound looping, I was hoping to do something like this:

    if (fileReader.currentTime >= fileReader.duration)
    {
        audioManager.outputBlock = nil;
        [fileWriter release];
    }

Any suggestions on how this might be solved? Many thanks.

Missing AudioUnit.h - Unable to build for device

Is anyone else experiencing this?

I'm using the unmodified iOS Novocaine example, and when I try building for a device (e.g. my iPhone) I get a "'AudioUnit/AudioUnit.h' file not found" error. There is an AudioUnit.framework in the project linked to the Mac target, but the AudioUnit.framework that I'm guessing should be linked to the iOS target is red (marked as missing in Xcode). When I go to add the framework, I don't see it under the iOS target, but I do see it under the Mac target. Was this framework deprecated?

I'm running Xcode 4.3.2 and iOS 5.1. No pre-release stuff here.

Thanks in advance for any insight into this.

Chris

Microphone monitoring disabled by AVCapture session (iOS 5.1.1)

I'm using Novocaine to monitor the mic's input volume in an app I'm working on, using the values to provide visual feedback to the user. The app also lets the user record a video message. Initializing the AVCapture sessions for the front camera and microphone after [Novocaine audioManager] is called permanently disables Novocaine's microphone monitoring. I largely borrowed the voice modulator code from the iOS example project.

To reproduce:
1. Create the singleton audioManager instance.
2. Create an AVCaptureSession that listens to the microphone and front-facing camera (I also have an AVCaptureVideoPreviewLayer in the view).
3. The audioManager instance stops outputting the magnitude value to the console. Re-initializing does not rectify the problem, nor does disabling the AVCaptureSession (with or without removing the microphone input).

I am using an iPhone4S on iOS 5.1.1.

Including Novocaine in projects

I have difficulty when I include Novocaine in my project: I get an "Unknown type name 'class'" error. If I use the Novocaine project as downloaded from GitHub, I am able to get it running. Any help?

Signal generator not making a clean signal

Hi,
In Xcode 4.6.3 and the latest iOS betas (for iOS 5.1 and up), when I compile the latest source after uncommenting the signal generator (lines 102-118 in ViewController.mm) and commenting out the default file reader, then both on device and in the simulator the sound produced is not only the signal but also a lower-frequency tone. If you change the frequency to 417.f, this is particularly audible.

If you can reproduce this, do you have any idea what I can do to remedy it? For good measure, I've included a zip of the project where you can just build and run to hear the unclean signal: https://www.dropbox.com/s/sfsp13zxtqm09n2/Novocaine-SignalGeneratorError.zip

Cheers

Nik

(AudioFileReader) setCurrentTime results in glitchy playback when used while playing

setCurrentTime works well for starting the audio file with a time offset, but when currentTime is set to a specific point in the file during playback, the result is very jerky audio. When I commented out these lines, it did not happen:

    - (void)setCurrentTime:(float)thisCurrentTime
    {
        dispatch_async(dispatch_get_main_queue(), ^{
            //[self pause];
            ExtAudioFileSeek(self.inputFile, thisCurrentTime*self.samplingRate);

            //[self clearBuffer];
            //[self bufferNewAudio];

            //[self play];
        });
    }

output on iOS is mono

Am I crazy? Has this always been the case and nobody noticed until now? Novocaine is outputting audio in mono through headphones. ioData->mBuffers[0].mNumberChannels is 1.

If I figure out why, I'll make a note, and, with luck and a good attitude, commit some code.

Seeking in audio reader results in distortions

Whenever I try to seek in an audio file (via the setCurrentTime method), the sound becomes distorted. I would appreciate any pointers on how seeking should be implemented in the application.
