AVAudioSession

Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

Posts under the AVAudioSession tag (85 posts)

AVAudioSessionErrorCodeCannotInterruptOthers
We develop a music app and have run into a scenario where there is no way to resume music playback, so I would like to ask how it can be achieved. For example: while our music is playing, a video starts in another app, so we pause the music; when the video is closed, we should resume the music. Our code listens for AVAudioSessionInterruptionNotification, and when we receive the notification with AVAudioSessionInterruptionOptionShouldResume, we try to play the music again, but error 560557684 (AVAudioSessionErrorCodeCannotInterruptOthers) is reported. We are very confused:

NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:0 error:&error];
[audioSession setActive:YES error:&error];

We compared with the Apple Music app and found that Apple Music can resume playback. Here is a video of the behavior in our app: https://drive.google.com/file/d/1J94S2kxkEpNvG536yzCnKmE7IN3cGzIJ/view?usp=sharing And here is the Apple Music behavior: https://drive.google.com/file/d/1c1Kdgkn2nhy8SdDvRJAFF2sPvqJ8fL48/view?usp=sharing We want to improve our user experience. How can we do that?
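
A common shape for this resume-after-interruption flow, sketched in Swift (the resumePlayback closure is a hypothetical stand-in for the app's own player, not part of the post's code):

import AVFoundation

final class InterruptionObserver {
    // Hypothetical hook into the app's player.
    var resumePlayback: (() -> Void)?

    init() {
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(handleInterruption(_:)),
            name: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance())
    }

    @objc private func handleInterruption(_ note: Notification) {
        guard let info = note.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

        switch type {
        case .began:
            // Playback was interrupted; update pause state here.
            break
        case .ended:
            let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
            if options.contains(.shouldResume) {
                do {
                    try AVAudioSession.sharedInstance().setActive(true)
                    resumePlayback?()
                } catch {
                    // AVAudioSessionErrorCodeCannotInterruptOthers lands here when
                    // another non-mixable session is still active.
                    print("Reactivation failed: \(error)")
                }
            }
        @unknown default:
            break
        }
    }
}

Even with this pattern, reactivation is commonly reported to fail with CannotInterruptOthers when attempted while the app is in the background and another non-mixable app is active; retrying shortly after returning to the foreground is one workaround.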
0 replies · 0 boosts · 479 views · Jul ’24

AVSpeechSynthesizer doesn't notifyOthersOnDeactivation
Hello, I am building a new iOS app which uses AVSpeechSynthesizer and should be able to mix audio nicely with audio from other apps. AVSpeechSynthesizer seems to handle setting the AVAudioSession to active on its own, but it does not deactivate the audio session. This leads to issues, namely that other audio sources remain "ducked" after AVSpeechSynthesizer is done speaking. I have implemented deactivating the audio session myself, which "works" in that it allows other audio sources to become un-ducked, but it throws this exception each time even though it appears successful:

Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}

It appears to be a bug in how AVSpeechSynthesizer handles activating/deactivating the audio session. Below is a minimal example which illustrates the problem. It has two buttons: one which manually deactivates the audio session, which throws the exception but otherwise works, and another which leaves audio session management to the AVSpeechSynthesizer but does not un-duck other audio. If you play some audio from another app (e.g. Music), you'll see that the button which throws/catches an exception successfully ducks and un-ducks the audio, while the one that does not attempt to deactivate the session ducks but never un-ducks it.

import SwiftUI
import AVFoundation

struct ContentView: View {
    let workingSynthesizer = UnduckingSpeechSynthesizer()
    let brokenSynthesizer = BrokenSpeechSynthesizer()

    init() {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
        } catch {
            print("Setup error info: \(error)")
        }
    }

    var body: some View {
        VStack {
            Button("Works Correctly") {
                workingSynthesizer.speak(text: "Hello planet")
            }
            Text("-------")
            Button("Does not work") {
                brokenSynthesizer.speak(text: "Hello planet")
            }
        }
        .padding()
    }
}

class UnduckingSpeechSynthesizer: NSObject {
    var synth = AVSpeechSynthesizer()
    let audioSession = AVAudioSession.sharedInstance()

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synth.speak(utterance)
    }
}

extension UnduckingSpeechSynthesizer: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        do {
            try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
        } catch {
            // Always throws, even though the deactivation appears to work:
            // Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed"
            print("Deactivate error info: \(error)")
        }
    }
}

class BrokenSpeechSynthesizer {
    var synth = AVSpeechSynthesizer()
    let audioSession = AVAudioSession.sharedInstance()

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synth.speak(utterance)
    }
}

(I have a separate issue where the first speech attempt takes a few seconds, but I don't think it's related.)
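
One way to sidestep the synthesizer's built-in session handling entirely (a sketch, not a confirmed fix for the exception above): since iOS 13, AVSpeechSynthesizer exposes usesApplicationAudioSession, which makes it speak through the app's own session so that activation and deactivation stay fully under your control:

import AVFoundation

final class AppManagedSynthesizer: NSObject, AVSpeechSynthesizerDelegate {
    private let synth = AVSpeechSynthesizer()
    private let session = AVAudioSession.sharedInstance()

    override init() {
        super.init()
        // Opt out of the synthesizer's private session management (iOS 13+).
        synth.usesApplicationAudioSession = true
        synth.delegate = self
    }

    func speak(_ text: String) {
        do {
            try session.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
            try session.setActive(true)
        } catch {
            print("Activation error: \(error)")
        }
        synth.speak(AVSpeechUtterance(string: text))
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        do {
            // Deactivating here is coherent because the app, not the
            // synthesizer, activated the session.
            try session.setActive(false, options: .notifyOthersOnDeactivation)
        } catch {
            print("Deactivation error: \(error)")
        }
    }
}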
6 replies · 2 boosts · 1.4k views · Jul ’24

Understanding AVAudioTime in AVAudioNodeTapBlock? Is there a way to get time relative to a scheduled Buffer?
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback; for example, if the audio's frame position is >= some point and less than some other point, trigger some code. So I'm looking at:

- (void)installTapOnBus:(AVAudioNodeBus)bus
             bufferSize:(AVAudioFrameCount)bufferSize
                 format:(AVAudioFormat * __nullable)format
                  block:(AVAudioNodeTapBlock)tapBlock;

I have the frame positions calculated in advance (determined before the audio is scheduled; all the necessary computations are already done), so I just need to fire code at certain points during playback:

[playerNode installTapOnBus:bus
                 bufferSize:bufferSize
                     format:format
                      block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullbuffer
                    atTime:startTime
                   options:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
         completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // Some code is here; not important to this question.
}];

The problem I'm having is figuring out what point in the full buffer I'm at within the tap block. The tap block passes chunks, not the full audio buffer. I tried using the when parameter of the block to calculate the frame position relative to the entire audio, but I have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not the entire audio buffer I scheduled). Not installing a tap and just using a timer before scheduling my full buffer has given me good results, but I'd rather avoid a timer if possible and use sample time.
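
One way to answer "where am I in the scheduled audio?" inside the tap, sketched in Swift: a tap on the player node reports node time, and AVAudioPlayerNode can convert that into its own playback timeline, whose sampleTime counts frames since the node started playing (engine startup and buffer scheduling are omitted; triggerFrames is a hypothetical list of precomputed positions):

import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
engine.attach(playerNode)
engine.connect(playerNode, to: engine.mainMixerNode, format: nil)

// Precomputed frame positions at which events should fire.
let triggerFrames: [AVAudioFramePosition] = [44_100, 88_200]

playerNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, when in
    // `when` is in node time; convert it to the player's timeline.
    guard let playerTime = playerNode.playerTime(forNodeTime: when),
          playerTime.isSampleTimeValid else { return }

    // Frame range covered by this tap chunk, relative to playback start.
    // If the buffer was scheduled atTime: with an offset, subtract that
    // offset to get positions relative to the buffer itself.
    let startFrame = playerTime.sampleTime
    let endFrame = startFrame + AVAudioFramePosition(buffer.frameLength)

    for frame in triggerFrames where frame >= startFrame && frame < endFrame {
        print("Reached frame \(frame)") // fire the event for this position
    }
}

Note that the tap block is delivered on an internal thread in chunks whose boundaries you don't control, so the trigger fires when the chunk containing the frame arrives, not sample-exactly.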
3 replies · 0 boosts · 1.4k views · Dec ’24

After unholding a CallKit call, the audio does not restore.
In my application, I use CallKit and have supportsHolding = true set. During a call in my app, another call comes in (e.g., GSM). I accept the incoming call and put the current call on hold.

If I end the active call myself, everything is fine, and CallKit calls provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession). However, if the other party ends the call, the second call remains on hold. The user then taps unhold in the app, and I notify CallKit that the hold has ended, but in this case the didActivate method is not called at all. If I try to activate the audio session myself after the unhold, I receive the error:

Error Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}

(Code 561017449 is AVAudioSessionErrorCodeInsufficientPriority.)

What needs to be done for CallKit to activate my audio?
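
For reference, the usual shape of the unhold path is to request it through CXCallController rather than touching the session directly, since only a granted CallKit action elevates the app's audio priority. A minimal sketch (callUUID is assumed to be the held call's identifier):

import CallKit
import Foundation

final class CallManager {
    private let callController = CXCallController()

    // Ask CallKit to take the call off hold. When the system performs the
    // action, the provider delegate should receive provider(_:didActivate:);
    // activating AVAudioSession directly instead can fail with
    // AVAudioSessionErrorCodeInsufficientPriority (561017449).
    func requestUnhold(callUUID: UUID) {
        let unhold = CXSetHeldCallAction(call: callUUID, onHold: false)
        callController.request(CXTransaction(action: unhold)) { error in
            if let error = error {
                print("Unhold request failed: \(error)")
            }
        }
    }
}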
3 replies · 1 boost · 1.4k views · Feb ’25

AudioQueue error 561145187
Our app has hit a weird problem in the release version: more and more users get error 561145187 when this code is called:

AudioQueueNewInput(&self->_recordFormat, inputBufferHandler, (__bridge void *)(self), NULL, NULL, 0, &self->_audioQueue);

I have searched for several weeks, but nothing has helped. Summing up the affected devices, we found some similarities:

- it only happens on iPadOS 14.0 and later
- it occurs when the app starts or wakes from the background (we call the code when the app receives UIApplicationDidBecomeActiveNotification)

Any idea why this happens?
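
For reference, 561145187 decodes to the FourCC '!rec' (AVAudioSessionErrorCodeCannotStartRecording), i.e. the session was not in a state where recording could start. A defensive sketch in Swift that activates a record-capable session first and prints failures as FourCC codes (the stream format and empty callback are placeholders, not the post's real recording pipeline):

import AVFoundation
import AudioToolbox

// Render an OSStatus as its FourCC when printable (561145187 -> "!rec").
func fourCC(_ status: OSStatus) -> String {
    let n = UInt32(bitPattern: status)
    let bytes = [24, 16, 8, 0].map { UInt8((n >> UInt32($0)) & 0xFF) }
    guard bytes.allSatisfy({ $0 >= 0x20 && $0 < 0x7F }),
          let s = String(bytes: bytes, encoding: .ascii) else { return "\(status)" }
    return s
}

func startRecording() {
    let session = AVAudioSession.sharedInstance()
    do {
        // Activate a record-capable session *before* creating the input
        // queue; creating the queue while the app is not yet fully active
        // is a plausible trigger for '!rec'.
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)
    } catch {
        print("Session activation failed: \(error)")
        return
    }

    var format = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 2, mFramesPerPacket: 1, mBytesPerFrame: 2,
        mChannelsPerFrame: 1, mBitsPerChannel: 16, mReserved: 0)

    var queue: AudioQueueRef?
    let status = AudioQueueNewInput(&format, { _, _, _, _, _, _ in }, nil, nil, nil, 0, &queue)
    if status != noErr {
        print("AudioQueueNewInput failed: \(fourCC(status))")
    }
}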
1 reply · 0 boosts · 1.1k views · Nov ’24