Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under the Audio subtopic

Post · Replies · Boosts · Views · Activity

Adding HTTP headers to an AVURLAsset
Hi, for an audio player that uses signed URLs, I need to attach an authorization header to the requests an AVURLAsset makes. This works, but not over AirPlay when streaming multiple songs in a queue. For each item I do:

    let headerFields: [String: String] = ["Authorization": getIdToken()!]
    super.init(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headerFields])

Only the first two songs in the queue actually get the authorization header; somehow it is dropped for subsequent songs. Any ideas on how I can fix this? Thanks, Thomas
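For reference, here is the asset-creation pattern the post describes as a self-contained sketch. The function and parameter names are illustrative, and "AVURLAssetHTTPHeaderFieldsKey" is the undocumented option key string used above:

    import AVFoundation

    // Builds an AVURLAsset that sends an Authorization header with its
    // requests. The option key is an undocumented string, so this behavior
    // is not guaranteed by the API contract (notably over AirPlay).
    func makeAuthorizedAsset(url: URL, token: String) -> AVURLAsset {
        let headers: [String: String] = ["Authorization": token]
        return AVURLAsset(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headers])
    }
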
1 reply · 1 boost · 1.7k views · Jan ’21

What causes "issue_type = overload" in coreaudiod with USB audio interface?
I have a USB audio interface that is causing kernel traps and making the audio output "skip" or drop out every few seconds. This behavior occurs with a completely fresh install of Catalina as well as Big Sur with the stock Music app on a 2019 MacBook Pro 16 (full specs below). The Console logs show coreaudiod getting an error from a kernel trap, a "USB Sound assertion" in AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644, and the Music app "skipping cycle due to overload."

I've added a short snippet from the Console logs around the time of the audio skip/dropout. The more complete logs are at this gist: https://gist.github.com/djflux/08d9007e2146884e6df1741770de5105 I've also opened a Feedback Assistant ticket (FB9037528): https://feedbackassistant.apple.com/feedback/9037528

Does anyone know what could be causing this issue? Thanks for any help. Cheers, Flux aka Andy.

Hardware Overview:
    Model Name: MacBook Pro
    Model Identifier: MacBookPro16,1
    Processor Name: 8-Core Intel Core i9
    Processor Speed: 2.4 GHz
    Number of Processors: 1
    Total Number of Cores: 8
    L2 Cache (per Core): 256 KB
    L3 Cache: 16 MB
    Hyper-Threading Technology: Enabled
    Memory: 64 GB
    System Firmware Version: 1554.80.3.0.0 (iBridge: 18.16.14347.0.0,0)

System Software Overview:
    System Version: macOS 11.2.3 (20D91)
    Kernel Version: Darwin 20.3.0
    Boot Volume: Macintosh HD
    Boot Mode: Normal
    Computer Name: mycomputername
    User Name: myusername
    Secure Virtual Memory: Enabled
    System Integrity Protection: Enabled

USB interface: Denon DJ DS1

Snippet of Console logs:
    error 21:07:04.848721-0500 coreaudiod HALS_IOA1Engine::EndWriting: got an error from the kernel trap, Error: 0xE00002D7
    default 21:07:04.848855-0500 Music HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
    default 21:07:04.857903-0500 kernel USB Sound assertion (Resetting engine due to error returned in Read Handler) in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644
    ...
    default 21:07:05.102746-0500 coreaudiod Audio IO Overload inputs: 'private' outputs: 'private' cause: 'Unknown' prewarming: no recovering: no
    default 21:07:05.102926-0500 coreaudiod CAReportingClient.mm:508 message { HostApplicationDisplayID = "com.apple.Music"; cause = Unknown; deadline = 2615019; "input_device_source_list" = Unknown; "input_device_transport_list" = USB; "input_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2"; "io_buffer_size" = 512; "io_cycle" = 1; "is_prewarming" = 0; "is_recovering" = 0; "issue_type" = overload; lateness = "-535"; "output_device_source_list" = Unknown; "output_device_transport_list" = USB; "output_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2"; }: (null)
39 replies · 13 boosts · 21k views · Apr ’21

AudioQueue error 561145187
Our app has hit a weird problem in the current release: more and more users get error 561145187 when this code runs:

    AudioQueueNewInput(&self->_recordFormat, inputBufferHandler, (__bridge void *)(self), NULL, NULL, 0, &self->_audioQueue)

I've searched for several weeks, but nothing has helped. Summing up all affected devices, we found some similarities: it only happens on iPadOS 14.0+, and it occurs when the app starts or wakes from the background (we call the code when the app receives UIApplicationDidBecomeActiveNotification). Any idea why this happens?
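As context: 561145187 is the FourCC '!rec', which corresponds to AVAudioSession.ErrorCode.cannotStartRecording, typically surfacing when the audio session isn't active or recording isn't currently permitted. Below is the failing call reconstructed as a compilable Swift sketch; the recording format and empty callback are illustrative stand-ins, not the poster's code:

    import AudioToolbox

    // Reconstruction of the failing call. 561145187 == '!rec'
    // (AVAudioSession.ErrorCode.cannotStartRecording).
    var recordFormat = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 2, mFramesPerPacket: 1, mBytesPerFrame: 2,
        mChannelsPerFrame: 1, mBitsPerChannel: 16, mReserved: 0)

    var audioQueue: AudioQueueRef?
    let status = AudioQueueNewInput(&recordFormat, { _, _, _, _, _, _ in
        // input buffer handler: consume the captured audio here
    }, nil, nil, nil, 0, &audioQueue)
    print("AudioQueueNewInput returned \(status)")
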
1 reply · 0 boosts · 1.1k views · Sep ’21

AVAudioPCMBuffer Memory Management
I’m using AVAudioEngine to get a stream of AVAudioPCMBuffers from the device’s microphone using the usual installTap(onBus:) setup. To distribute the audio stream to other parts of the program, I’m sending the buffers to a Combine publisher similar to the following:

    private let publisher = PassthroughSubject<AVAudioPCMBuffer, Never>()

I’m starting to suspect I have some kind of concurrency or memory management issue with the buffers, because when consuming the buffers elsewhere I’m getting a range of crashes that suggest some internal pointer in a buffer is NULL (specifically, I’m seeing crashes in vDSP.convertElements(of:to:) when I try to read samples from the buffer). These crashes are in production and fairly rare — I can’t reproduce them locally. I never modify the audio buffers, only read them for analysis.

My question is: should it be possible to put AVAudioPCMBuffers into a Combine pipeline? Does the AVAudioPCMBuffer class not retain/release the underlying AudioBufferList’s memory the way I’m assuming? Is this a fundamentally flawed approach?
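A self-contained sketch of the setup being described, with illustrative names (the original class isn't shown in the post):

    import AVFoundation
    import Combine

    final class MicStream {
        private let engine = AVAudioEngine()
        private let subject = PassthroughSubject<AVAudioPCMBuffer, Never>()
        var publisher: AnyPublisher<AVAudioPCMBuffer, Never> { subject.eraseToAnyPublisher() }

        func start() throws {
            let input = engine.inputNode
            let format = input.outputFormat(forBus: 0)
            input.installTap(onBus: 0, bufferSize: 1024, format: format) { [subject] buffer, _ in
                // The tap block runs on an audio-owned thread; every downstream
                // Combine operator also runs there unless the pipeline hops to
                // another queue explicitly.
                subject.send(buffer)
            }
            try engine.start()
        }
    }
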
1 reply · 1 boost · 1.4k views · Oct ’21

Indicate Packet Loss With AVAudioConverter for OPUS Decoding
I'm using an AVAudioConverter object to decode an OPUS stream for VoIP. The decoding itself works well; however, whenever the stream stalls (no more audio packets are available to decode because of network instability), this is audible as crackling or an abrupt stop in the decoded audio. OPUS can mitigate this if the caller indicates packet loss, which in the C library is done by passing a null pointer to int opus_decode_float(OpusDecoder *st, const unsigned char *data, opus_int32 len, float *pcm, int frame_size, int decode_fec), see https://opus-codec.org/docs/opus_api-1.2/group__opus__decoder.html#ga9c554b8c0214e24733a299fe53bb3bd2.

However, with AVAudioConverter in Swift I'm constructing an AVAudioCompressedBuffer like so:

    let compressedBuffer = AVAudioCompressedBuffer(
        format: VoiceEncoder.Constants.networkFormat,
        packetCapacity: 1,
        maximumPacketSize: data.count
    )
    compressedBuffer.byteLength = UInt32(data.count)
    compressedBuffer.packetCount = 1
    compressedBuffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
    data.copyBytes(
        to: compressedBuffer.data.assumingMemoryBound(to: UInt8.self),
        count: data.count
    )

where data: Data contains the raw OPUS frame to be decoded. How can I signal data loss in this context and make the AVAudioConverter output PCM data even when no more input data is available?

More context: I'm specifying the audio format like this:

    static let frameSize: UInt32 = 960
    static let sampleRate: Float64 = 48000.0
    static var networkFormatStreamDescription = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatOpus,
        mFormatFlags: 0,
        mBytesPerPacket: 0,
        mFramesPerPacket: frameSize,
        mBytesPerFrame: 0,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 0,
        mReserved: 0
    )
    static let networkFormat = AVAudioFormat(
        streamDescription: &networkFormatStreamDescription
    )!

I've tried 1) setting byteLength and packetCount to zero and 2) returning nil while setting .haveData in the AVAudioConverterInputBlock, with no success.
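For reference, a minimal sketch of the converter input block involved here, with an illustrative `pending` queue standing in for the network jitter buffer. This shows the input contract only; returning .noDataNow pauses output rather than signaling packet loss to the decoder:

    import AVFoundation

    var pending: [AVAudioCompressedBuffer] = []

    let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
        guard !pending.isEmpty else {
            // No packet available: the converter returns without producing
            // output; this does not tell OPUS a packet was lost.
            outStatus.pointee = .noDataNow
            return nil
        }
        outStatus.pointee = .haveData
        return pending.removeFirst()
    }
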
1 reply · 1 boost · 761 views · Feb ’22

Audio / Video sync issue on iOS using AVSampleBufferRenderSynchronizer
My current app implements a custom video player based on an AVSampleBufferRenderSynchronizer synchronizing two renderers: an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers, and an AVSampleBufferAudioRenderer receiving decoded LPCM-based audio CMSampleBuffers.

The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime) with rate = 1 and time set to the presentation timestamp of the first decoded image.

Presentation timestamps of video and audio sample buffers are consistent, and on most streams the audio and video are correctly synchronized. However, on some network streams on iOS, the audio and video aren't synchronized, with a time difference that seems to increase over time. On the other hand, with the same player code and network streams on macOS, the synchronization always works fine.

This reminds me of something I've read about cases where an AVSampleBufferRenderSynchronizer could not synchronize audio and video, causing them to run on independent and potentially drifting clocks, but I cannot find it again. So, any help/hints on this sync problem will be greatly appreciated! :)
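A minimal sketch of the setup described, with an illustrative starting timestamp:

    import AVFoundation

    let synchronizer = AVSampleBufferRenderSynchronizer()
    let videoLayer = AVSampleBufferDisplayLayer()
    let audioRenderer = AVSampleBufferAudioRenderer()

    synchronizer.addRenderer(videoLayer)
    synchronizer.addRenderer(audioRenderer)

    // ... decoded CMSampleBuffers are enqueued on both renderers ...

    // Start playback at the PTS of the first decoded video frame.
    let firstVideoPTS = CMTime(value: 0, timescale: 600) // illustrative
    synchronizer.setRate(1.0, time: firstVideoPTS)
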
2 replies · 0 boosts · 1.1k views · May ’23

Storing AVAsset in SwiftData
Hi, I am creating an app whose data can include videos or images. While @Attribute(.externalStorage) helps with images, for AVAssets I actually want access to the URL behind that data (it would be wasteful to load and then save the data again just to obtain a URL). One key constraint is to keep all of this clean enough that I can use (private) CloudKit syncing with the resulting model. All the best, Christoph
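Purely as an illustration of one possible model shape (the names and the bookmark-based video reference are hypothetical, not something the post or SwiftData prescribes):

    import Foundation
    import SwiftData

    @Model
    final class MediaItem {
        // Image bytes can live in external storage.
        @Attribute(.externalStorage) var imageData: Data?
        // Hypothetical: reference the video indirectly (e.g. bookmark data
        // from URL.bookmarkData()) so the URL can be recovered without
        // copying the asset's bytes into the model.
        var videoBookmark: Data?

        init(imageData: Data? = nil, videoBookmark: Data? = nil) {
            self.imageData = imageData
            self.videoBookmark = videoBookmark
        }
    }
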
1 reply · 0 boosts · 522 views · Oct ’23

Detect when (internal or external) microphone is being used
Hello, I use kAudioDevicePropertyDeviceIsRunningSomewhere to check whether an internal or external microphone is being used. My code works well for the internal microphone and for microphones connected with a cable, but external microphones connected over Bluetooth do not report their status: the property request always succeeds, yet the device is always reported as inactive.

The main relevant parts of my code:

    static inline AudioObjectPropertyAddress makeGlobalPropertyAddress(AudioObjectPropertySelector selector) {
        AudioObjectPropertyAddress address = {
            selector,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster,
        };
        return address;
    }

    static BOOL getBoolProperty(AudioDeviceID deviceID, AudioObjectPropertySelector selector) {
        AudioObjectPropertyAddress const address = makeGlobalPropertyAddress(selector);
        UInt32 prop;
        UInt32 propSize = sizeof(prop);
        OSStatus const status = AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &propSize, &prop);
        if (status != noErr) {
            // This line never gets executed in my tests: the call above always
            // succeeds, but it always reports "false".
            return 0;
        }
        return static_cast<BOOL>(prop == 1);
    }

    ...

    __block BOOL microphoneActive = NO;
    iterateThroughAllInputDevices(^(AudioObjectID object, BOOL *stop) {
        if (getBoolProperty(object, kAudioDevicePropertyDeviceIsRunningSomewhere) != 0) {
            microphoneActive = YES;
            *stop = YES;
        }
    });

What could cause this, and how could it be fixed? Thank you for your help in advance!
3 replies · 0 boosts · 1.3k views · Nov ’23

AudioDriverKit IOUserAudioIOOperationWriteEnd avoid for input loopback
Dear Sirs,

I've written an audio driver based on AudioDriverKit. In my audio callback function I receive calls with the IO operations IOUserAudioIOOperationWriteEnd and IOUserAudioIOOperationBeginRead as expected: I see IOUserAudioIOOperationWriteEnd during playback in an application like VLC or a browser, and IOUserAudioIOOperationBeginRead when recording in Audacity etc.

But when I open System Settings, go to Sound, and select my driver as input, I also see calls with IOUserAudioIOOperationWriteEnd that appear to carry the just-read input data. I can also observe this when starting up Teams. I think the purpose is to mix the (mic) input into the output so you can listen to yourself. Nevertheless, I'd like to avoid this entirely, but I don't see a way to distinguish the playback audio data from the input audio data inside this callback. How could I do this? Or, even better, is there a switch that would completely turn off these callbacks that forward the input to the output?

Thanks and best regards, Johannes
2 replies · 0 boosts · 915 views · Feb ’24

Save/Restore properties on reboot in audio driver based on AudioDriverKit
Dear Sirs,

When writing an AudioServerPlugin, I can use the host's WriteToStorage/CopyFromStorage functions to save and restore custom properties across a restart of the machine. Are there corresponding functions for an audio driver based on AudioDriverKit? What would be the recommended way to save and restore properties so that they are available again after a reboot in an AudioDriverKit-based driver?

Thanks and best regards, Johannes
1 reply · 0 boosts · 775 views · Feb ’24

Should the MV-HEVC Encoder Support Multiple Passes?
As a straightforward example, I've taken Apple's MV-HEVC sample project and added two lines. First, after the AVAssetWriterInput is created:

    frameInput.performsMultiPassEncodingIfSupported = true

Second, after the call to multiviewWriter.startWriting():

    print("canPerformMultiplePasses: \(frameInput.canPerformMultiplePasses)")

which prints true. This leads me to believe that the first encoding pass should proceed as normal (even though I haven't handled the logic for the completion of the first pass, etc.). However, I receive this error when the code attempts to appendTaggedBuffers to the AVAssetWriterInputTaggedPixelBufferGroupAdaptor:

    Fatal error: Failed to append tagged buffers to multiview output

Am I missing a step? Or is multi-pass encoding only supported for standard sample/pixel buffers (and not tagged buffers)?
1 reply · 3 boosts · 743 views · Mar ’24

Understanding AVAudioTime in AVAudioNodeTapBlock? Is there a way to get time relative to a scheduled Buffer?
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback; for example, if the audio's frame position is >= some point and less than some other point, trigger some code. So I'm looking at:

    - (void)installTapOnBus:(AVAudioNodeBus)bus
                 bufferSize:(AVAudioFrameCount)bufferSize
                     format:(AVAudioFormat * __nullable)format
                      block:(AVAudioNodeTapBlock)tapBlock;

I have the frame positions calculated in advance (determined before the audio is scheduled; I've already made all the necessary computations), so I just need to fire code at certain points during playback:

    [playerNode installTapOnBus:bus bufferSize:bufferSize format:format block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        // Inspect current audio here and fire...
    }];

    [playerNode scheduleBuffer:fullbuffer
                        atTime:startTime
                       options:0
        completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
             completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
        // some code is here, not important to this question.
    }];

The problem I'm having is figuring out where in the full buffer I am within the tap block, since the tap block passes chunks, not the full audio buffer. I tried using the block's when parameter to calculate the frame position relative to the entire audio, but have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not the entire audio buffer I scheduled). Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results, but I'd rather avoid a timer if possible and use sample time.
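One relevant API here is AVAudioPlayerNode's playerTime(forNodeTime:), which converts the node time handed to the tap into the player's own timeline (sample time since playback started). A sketch under the assumption that the trigger points are frame positions relative to the start of playback:

    import AVFoundation

    func installTriggerTap(on playerNode: AVAudioPlayerNode,
                           format: AVAudioFormat,
                           triggerRange: Range<AVAudioFramePosition>,
                           fire: @escaping () -> Void) {
        playerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { _, when in
            // Map the tap's node time onto the player's timeline.
            guard let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
            if triggerRange.contains(playerTime.sampleTime) {
                fire()
            }
        }
    }
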
3 replies · 0 boosts · 1.4k views · Apr ’24

AVAudioEngine Dolby Atmos
Hi! I have a music app using AVAudioEngine. Right now, I have it set up to play multichannel tracks and show "Multichannel" in the volume controls. However, I am unable to figure out how to get it to use Dolby Atmos. Is there something that needs to be enabled, and is it even possible with AVAudioEngine? I've seen apps that can play Dolby Atmos, but they don't have an EQ feature, so I'm guessing they are not using AVAudioEngine.
2 replies · 0 boosts · 929 views · Apr ’24

iPad app on macOS not asking for microphone permission
Hello, I have an iOS app that records audio, and it works fine on iPads and iPhones: it asks for microphone permission, and after that recording works. I installed the same app on my M3 MacBook via TestFlight, since iPad apps are supposed to work there without changes. The app starts fine, but it never asks for microphone permission, so I can't record. Do I need to do something to make this happen? (This is not Mac Catalyst; it's the arm64 iPhone binary running on macOS.) Thanks
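For reference, a minimal sketch of requesting capture permission explicitly rather than waiting for the first recording attempt to trigger the prompt. This assumes NSMicrophoneUsageDescription is present in the Info.plist; whether it changes the iPad-on-Mac behavior is untested:

    import AVFoundation

    // Explicitly request microphone access up front; the system shows the
    // permission prompt on the first call and caches the answer afterwards.
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        print("Microphone access granted: \(granted)")
    }
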
2 replies · 1 boost · 721 views · May ’24

AudioComponentInstanceNew takes up to five seconds to complete
We are using a VoiceProcessingIO audio unit in our VoIP application on the Mac. In certain scenarios, the AudioComponentInstanceNew call blocks for up to five seconds (at least two). We are using the following code to initialize the audio unit:

    OSStatus status;
    AudioComponentDescription desc;
    AudioComponent inputComponent;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    inputComponent = AudioComponentFindNext(NULL, &desc);
    status = AudioComponentInstanceNew(inputComponent, &unit);

We see the issue on current macOS versions on a host of different Macs (x86 and x64 alike). It takes two to three seconds until AudioComponentInstanceNew returns. We also see the following error in the log multiple times:

    AUVPAggregate.cpp:2560 AggInpStreamsChanged wait failed

and these right after (though I don't know whether they matter to this issue):

    KeystrokeSuppressorCore.cpp:44 ERROR: KeystrokeSuppressor initialization was unsuccessful. Invalid or no plist was provided. AU will be bypassed.
    vpStrategyManager.mm:486 Error code 2003332927 reported at GetPropertyInfo
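Not a fix for the delay itself, but for illustration: AudioComponentInstantiate is the asynchronous counterpart to AudioComponentInstanceNew, so a multi-second instantiation at least doesn't block the calling thread. A Swift sketch with illustrative completion handling:

    import AudioToolbox

    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_VoiceProcessingIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    if let component = AudioComponentFindNext(nil, &desc) {
        // The completion handler fires once the unit is ready (or with an
        // error status), instead of blocking the caller.
        AudioComponentInstantiate(component, []) { unit, status in
            guard status == noErr, let unit else { return }
            _ = unit // hand the instance over to the audio setup path
        }
    }
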
3 replies · 1 boost · 1.1k views · Jun ’24

IO buffer sizes for audio driver based on IOUserAudioDevice
Dear Sirs,

I've written an audio driver based on IOUserAudioDevice. In my IOOperationHandler I can receive and send the audio samples as expected. Is there any way to configure the number of samples transferred in each call? Currently it seems to be around 512 samples per call, which corresponds to about 10.7 ms at a 48 kHz sample rate. I'd like to achieve something like 48 or 96 samples per call. I did some experiments and tried calls to SetOutputLatency() etc., but so far I haven't found the right way to change the in_io_buffer_frame_size in the callback. I'd like to do this because smaller buffer sizes would allow lower latencies for the subsequent audio processing.

Thanks and best regards, Johannes
3 replies · 0 boosts · 843 views · Jul ’24

AVSpeechSynthesizer doesn't notifyOthersOnDeactivation
Hello, I am building a new iOS app that uses AVSpeechSynthesizer and should mix audio nicely with audio from other apps. AVSpeechSynthesizer seems to handle setting the AVAudioSession active on its own, but it does not deactivate the audio session. This leads to issues, namely that other audio sources remain "ducked" after AVSpeechSynthesizer is done speaking.

I have implemented deactivating the audio session myself, which "works" in that it allows other audio sources to become un-ducked, but it throws this exception each time even though it appears successful:

    Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}

It appears to be a bug in how AVSpeechSynthesizer activates/deactivates the audio session.

Below is a minimal example that illustrates the problem. It has two buttons: one that manually deactivates the audio session, which throws the exception but otherwise works, and another that leaves audio session management to the AVSpeechSynthesizer but does not un-duck other audio. If you play some audio from another app (e.g. Music), you'll see that the button which throws/catches an exception successfully ducks/un-ducks the audio, while the one that does not attempt to deactivate the session ducks but never un-ducks the audio.

    import SwiftUI
    import AVFoundation

    struct ContentView: View {
        let workingSynthesizer = UnduckingSpeechSynthesizer()
        let brokenSynthesizer = BrokenSpeechSynthesizer()

        init() {
            let audioSession = AVAudioSession.sharedInstance()
            do {
                try audioSession.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
            } catch {
                print("Setup error info: \(error)")
            }
        }

        var body: some View {
            VStack {
                Button("Works Correctly") {
                    workingSynthesizer.speak(text: "Hello planet")
                }
                Text("-------")
                Button("Does not work") {
                    brokenSynthesizer.speak(text: "Hello planet")
                }
            }
            .padding()
        }
    }

    class UnduckingSpeechSynthesizer: NSObject {
        var synth = AVSpeechSynthesizer()
        let audioSession = AVAudioSession.sharedInstance()

        override init() {
            super.init()
            synth.delegate = self
        }

        func speak(text: String) {
            let utterance = AVSpeechUtterance(string: text)
            synth.speak(utterance)
        }
    }

    extension UnduckingSpeechSynthesizer: AVSpeechSynthesizerDelegate {
        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
            do {
                try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
            } catch {
                // Always throws:
                // Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed"
                print("Deactivate error info: \(error)")
            }
        }
    }

    class BrokenSpeechSynthesizer {
        var synth = AVSpeechSynthesizer()
        let audioSession = AVAudioSession.sharedInstance()

        func speak(text: String) {
            let utterance = AVSpeechUtterance(string: text)
            synth.speak(utterance)
        }
    }

(I have a separate issue where the first speech attempt takes a few seconds, but I don't think it's related.)
6 replies · 2 boosts · 1.4k views · Jul ’24

MPMusicPlayerController's nowPlayingItem not changing song
MPMusicPlayerController's nowPlayingItem no longer seems to be able to change a song. The code used to work, but seems to be broken on iOS 16, 17, and now the iOS 18 beta. When newSong is triggered, the current song restarts but the player does not change songs. Instead I get the following error:

    Failed to set now playing item error=<MPMusicPlayerControllerErrorDomain.5 "Unable to play item <MPConcreteMediaItem: 0x9e9f0ef70> 206357861099970620" {}>

The documentation seems to indicate I'm doing things correctly:

    class MusicPlayer {
        var songTwo: MPMediaItem?
        let player = MPMusicPlayerController.applicationMusicPlayer

        func start() async {
            await MPMediaLibrary.requestAuthorization()
            let myPlaylistsQuery = MPMediaQuery.playlists()
            let playlists = myPlaylistsQuery.collections!.filter { $0.items.count > 2 }
            let playlist = playlists.first!
            let songOne = playlist.items.first!
            songTwo = playlist.items[1]
            player.setQueue(with: playlist)
            play(songOne)
        }

        func newSong() {
            guard let songTwo else { return }
            play(songTwo)
        }

        private func play(_ song: MPMediaItem) {
            player.stop()
            player.nowPlayingItem = song
            player.prepareToPlay()
            player.play()
        }
    }
2 replies · 1 boost · 768 views · Jul ’24