6.1. AVFoundation Framework

Version 2.2 of the iPhone SDK introduced a new framework for playing and mixing audio: AVFoundation. The AVFoundation framework provides an easy-to-use interface for playing simple sound files. It may appear to be a simple framework, but it packs some punch. It not only plays individual sound files, but can also play multiple sounds simultaneously. The AVAudioPlayer class provides volume control for each sound, allowing you to perform basic sound mixing, and can play a number of different file formats including .mp3, .m4a, .wav, .caf, .aif, and others. In addition to this, it provides properties for reading power levels, allowing you to do something most geeks have loved to do since the Apple ][ days: build your own VU meters.

The AVFoundation framework is useful for adding sound support to applications such as instant messengers or simple games, which don't require sophisticated digital sound. To create digital sound streams, you'll still need to use the Audio Toolbox framework, covered later in this chapter.

In order to use the AVFoundation framework, you'll need to add it to your existing project. Right-click on the Frameworks folder in your Xcode project, and choose Add → Existing Frameworks. Navigate to your SDK's Frameworks directory and choose the AVFoundation.framework folder.

You'll need to be using version 2.2 or later of the iPhone SDK in order to use the AVFoundation framework.


6.1.1. The Audio Player

The AVAudioPlayer class encapsulates a single sound to be played. The player is initialized with an NSURL object pointing to the resource to be played. To play multiple sounds concurrently, you can create a new AVAudioPlayer object for each sound. Think of the audio player, then, as a single track within a multitrack mixing board:

#import <AVFoundation/AVFoundation.h>

NSError *err = nil;
AVAudioPlayer *player = [ [ AVAudioPlayer alloc ]
    initWithContentsOfURL: [ NSURL fileURLWithPath:
        [ [ NSBundle mainBundle ] pathForResource: @"sample"
            ofType: @"m4a" ] ]
    error: &err
];
if (player == nil) {
    /* Initialization failed; inspect err for the reason */
}

You can also initialize a player with an NSData object containing the complete contents of a sound file in memory:

#import <AVFoundation/AVFoundation.h>

NSError *err = nil;
AVAudioPlayer *player = [ [ AVAudioPlayer alloc ]
    initWithData: myData
    error: &err
];
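Here, myData stands in for any NSData object holding the complete contents of a sound file; how you obtain it is up to you. As one hedged example, you could read the same bundled resource from disk (a quick sketch, assuming the sample.m4a file from the previous example):

NSData *myData = [ NSData dataWithContentsOfFile:
    [ [ NSBundle mainBundle ] pathForResource: @"sample"
        ofType: @"m4a" ] ];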

6.1.2. Player Properties

Once you have created and initialized an AVAudioPlayer object, you can set various properties for it.

You can mix the output from multiple players together using the player's volume property. The volume is represented as a value between 0.0 and 1.0:

player.volume = 0.5;
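As a sketch of the mixing-board idea, suppose you've created two players as shown earlier (the musicPlayer and effectPlayer names here are hypothetical); starting both with the play method, covered shortly, mixes the two sounds at their respective volumes:

/* Hypothetical players, each initialized as shown earlier */
musicPlayer.volume = 0.25;   /* quiet background track */
effectPlayer.volume = 1.0;   /* sound effect at full volume */
[ musicPlayer play ];
[ effectPlayer play ];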

If the sound should repeat, set the numberOfLoops property to the number of times the sound should loop back and play again after its first play-through; a negative value loops the sound indefinitely until you stop it. The default value of 0 plays the sample only once:

player.numberOfLoops = 3;   /* the sound plays four times in all */

If you'd like the sample to start playing at a particular time offset, you can place the "record needle" anywhere you like by setting the currentTime property. Later on, you can read this offset back while the sample is playing. This property is an NSTimeInterval, a typedef for double, representing the number of seconds into the sample:

NSTimeInterval currentTime = player.currentTime;
player.currentTime = 5.0;
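For instance, to nudge the needle five seconds ahead of wherever it currently sits (a quick sketch; reading and writing the property in one statement):

/* Skip five seconds ahead in the sample */
player.currentTime = player.currentTime + 5.0;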

In addition to the properties you can manually adjust, some read-only properties also exist that can help identify the characteristics of the sound you've loaded.

You can read the number of channels present in the sample from the numberOfChannels property. This returns 1 for mono samples, or 2 for stereo samples:

NSUInteger channels = player.numberOfChannels;

You can also read the duration of the sample (in seconds) through the duration property. This, too, is an NSTimeInterval, a typedef for double:

NSTimeInterval duration = player.duration;
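As a quick illustration, you might log both read-only properties together (the %u and %.2f format specifiers match NSUInteger and NSTimeInterval on the iPhone):

NSLog(@"sample has %u channel(s) and runs %.2f seconds",
    player.numberOfChannels, player.duration);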

6.1.3. Playing Sounds

If you're queuing up a sound and want to be able to play it immediately on demand, use the prepareToPlay method to allocate all of the sound's resources and get the sound queued to play internally. It's OK if you don't call this yourself; it will be invoked automatically when the sound is actually played. If you don't call it, though, there may be a very slight delay while the sample queues up:

[ player prepareToPlay ];

When you're finally ready to play the sound, use the player's play method:

[ player play ];

You can also stop the sound at any time, using the stop method:

[ player stop ];
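Putting the pieces together, a minimal one-shot playback routine might look like the following sketch. The playSample method name and the player instance variable are made up for illustration, and the example assumes manual reference counting:

/* Sketch: load a bundled sample, queue it, and start playback.
   The player instance variable keeps the player alive while
   the sound plays. */
- (void)playSample
{
    NSError *err = nil;
    NSString *path = [ [ NSBundle mainBundle ]
        pathForResource: @"sample" ofType: @"m4a" ];

    [ player release ];   /* discard any previous player */
    player = [ [ AVAudioPlayer alloc ]
        initWithContentsOfURL: [ NSURL fileURLWithPath: path ]
        error: &err ];
    if (player == nil)
        return;           /* inspect err for the reason */

    [ player prepareToPlay ];
    [ player play ];
}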

6.1.4. Delegate Methods

The AVAudioPlayerDelegate protocol defines methods that a delegate can implement to learn when sounds have finished playing, intercept decoding errors, and receive notification of interruptions.

When the play method is invoked, control is returned immediately to your program while the sound plays in the background. To receive a notification when the sound has finished playing, assign a delegate to the player's delegate property:

player.delegate = self;
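For self to act as the delegate, its class should adopt the AVAudioPlayerDelegate protocol in its interface. A minimal sketch might look like this (MyAudioController is a hypothetical class name):

#import <AVFoundation/AVFoundation.h>

@interface MyAudioController : NSObject <AVAudioPlayerDelegate>
{
    AVAudioPlayer *player;
}
@end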

When a sound has finished playing, the delegate's audioPlayerDidFinishPlaying:successfully: method is invoked. Implement this method in your delegate's code to receive the notification. A Boolean value is provided in the method parameters to specify whether the sound played successfully:

- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player
    successfully:(BOOL)flag
{
    /* Additional code to perform when player is finished */
}

If an error occurs while decoding the sound, the delegate's audioPlayerDecodeErrorDidOccur:error: method is invoked. You can use this to gracefully handle decoding errors that occur while playing the sample:

- (void)audioPlayerDecodeErrorDidOccur:(AVAudioPlayer *)player
    error:(NSError *)error
{
    /* Code to handle decoder error */
}

If an incoming phone call, device lock, or some other activity interrupts the player, the delegate is notified at the beginning and end of the interruption through two methods. You can use them to restart the player or perform some other action, like asking your user not to take phone calls while you're playing a sound (the nerve!):

- (void)audioPlayerBeginInterruption:(AVAudioPlayer *)player {
    /* Code to handle interruption */
}

- (void)audioPlayerEndInterruption:(AVAudioPlayer *)player {
    /* Code to handle end of interruption */
}
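For example, one simple policy is to resume playback automatically once the interruption ends; a sketch of the two handlers under that assumption:

- (void)audioPlayerBeginInterruption:(AVAudioPlayer *)player {
    /* The player is paused during the interruption;
       update any playback UI here */
}

- (void)audioPlayerEndInterruption:(AVAudioPlayer *)player {
    /* One simple policy: pick up where the sound left off */
    [ player play ];
}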

6.1.5. Metering

The AVAudioPlayer class can report meter values, allowing your application to read the output levels of the sound it's playing. The output levels can be used to render a visual indicator of the sound as it's playing. You can read both the average power level and the peak power level (in decibels). To enable metering, set the meteringEnabled property to YES:

player.meteringEnabled = YES;

As the sound plays, invoke the updateMeters method to refresh the meter values:

[ player updateMeters ];

The average and peak power levels for each channel can then be read. Values are returned as floats representing the decibel level of each channel, typically ranging from -100.0 (near silence) to 0.0 (full power):

for (int i=0; i<player.numberOfChannels; i++) {
    float power = [ player averagePowerForChannel: i ];
    float peak = [ player peakPowerForChannel: i ];
}
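Because meter readings are only as fresh as the last call to updateMeters, a common approach is to drive the updates and the redraw from a short-interval timer while the sound plays. The following sketch assumes a player instance variable like the one declared earlier; the startMetering and refreshMeter: method names are made up for illustration:

/* Sketch: poll the player's meters ten times per second.
   Keep a reference to the timer and invalidate it when
   playback stops. */
- (void)startMetering {
    player.meteringEnabled = YES;
    [ NSTimer scheduledTimerWithTimeInterval: 0.1
        target: self
        selector: @selector(refreshMeter:)
        userInfo: nil
        repeats: YES ];
}

- (void)refreshMeter:(NSTimer *)timer {
    [ player updateMeters ];
    for (int i = 0; i < player.numberOfChannels; i++) {
        float power = [ player averagePowerForChannel: i ];
        /* Redraw this channel's VU meter using power (in dB) */
    }
}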