Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post

Replies

Boosts

Views

Activity

Save MPEG-TS (H.264 or HEVC) video stream using AVAssetWriter.
I'm capturing a video stream from a GoPro camera (I demux UDP MPEG-TS packets) and create CMSampleBuffers from them. This works fine when I display them using AVSampleBufferDisplayLayer. However, when I dump them to disk using AVAssetWriter and then play the file back with AVPlayer, AVPlayer has problems with scrubbing: it cannot render previous frames and has to seek back to key frames. Thumbnails generated with AVAssetImageGenerator are also mostly distorted and green, even though I set requestedTimeToleranceAfter to longer than the key frame interval. When I re-encode the saved video with AVAssetExportSession and play that back, I can scrub the video just fine. Is it because re-transcoding adds additional metadata that enables generating frames when rewinding and scrubbing? If so, is there a way to achieve that with AVAssetWriter without much of a time penalty? I need the dump/save operation to be very fast. I also considered the following: instead of demuxing the video and creating CMSampleBuffers, maybe I could dump the stream directly to disk and somehow add moov atoms with the timing information. Would this approach work? If so, where can I find information on how to do it? Thank you!
3
0
191
Apr ’25
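For the post above, a minimal passthrough sketch, assuming the demuxer already hands over compressed CMSampleBuffers: passing nil output settings makes AVAssetWriter copy the frames to disk without re-encoding, which keeps the save fast. Note that scrubbing depends on each buffer carrying a correct kCMSampleAttachmentKey_NotSync attachment; if the demuxer omits it, the written file ends up with no usable sync-sample table, which matches the symptoms described.

    import AVFoundation

    // A sketch: write already-compressed, demuxed samples straight to disk.
    func makePassthroughWriter(outputURL: URL,
                               formatHint: CMFormatDescription) throws -> (AVAssetWriter, AVAssetWriterInput) {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        let input = AVAssetWriterInput(mediaType: .video,
                                       outputSettings: nil,          // nil = passthrough, no re-encode
                                       sourceFormatHint: formatHint) // from the demuxed stream
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        return (writer, input)
    }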
Can backgrounded apps record audio?
I'd like to find out: can backgrounded apps record audio? In the past, as I recall, backgrounded apps were pretty restricted and couldn't do much of anything, but I'm not familiar with the current state of affairs. With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone? Thanks.
3
0
579
Jan ’26
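A sketch of the usual recipe for the question above, assuming the user has already granted microphone permission: recording can continue in the background only if the app declares the audio entry in UIBackgroundModes in its Info.plist and has an active, record-capable AVAudioSession when it is backgrounded.

    import AVFoundation

    // Category .playAndRecord (or .record) plus the "audio" background mode is
    // the standard prerequisite for microphone use while backgrounded.
    func configureRecordingSession() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)
    }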
Why does CADisplayLink of an external UIScreen drift in time?
I am using Apple's original Lightning Digital AV Adapter (Lightning-to-HDMI dongle) to connect my iPhone to an external display via an HDMI cable. I need to synchronize rendering with the external display's refresh rate, so I create a new CADisplayLink tied to the external display's UIScreen: UIScreen.screens[externalDisplayIdx].displayLink(withTarget:, selector:). The callback is called regularly, but with increasing delay relative to CADisplayLink.timestamp, so each time the callback fires I have less and less time to draw the next frame (see the snippet below). Assuming 60 FPS, the value of secondsTillDeadline starts at an arbitrary value in the range of approximately -0.0001 to 0.0166667, then slowly decreases towards zero (and for a brief period goes into small negative numbers). Once it reaches zero, it flips back to 0.0166667 and starts decreasing again. This cycle repeats indefinitely. Changing the external display's resolution (the UIScreen's mode) or the CADisplayLink's preferredFrameRateRange to a lower FPS does not seem to have any effect on the temporal drift (even the rate of change seems to be the same). When I create a new CADisplayLink for the iPhone's main screen, the value of secondsTillDeadline is stable: it does not drift and stays very close to 0.0166667, as expected. Is this drift caused by the external monitor or by Apple's Lightning-to-HDMI dongle, or is the problem somewhere else? Can the drifting be stopped?

    func onDisplayLinkUpdate(displayLink: CADisplayLink) {
        // Gradually decreases from 0.01667 to -0.0001, then flips back to 0.01667 and continues to decrease
        let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
    }
3
0
476
Aug ’25
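A coping sketch for the post above, not a fix for the underlying clock drift: treat targetTimestamp as the authoritative per-frame deadline rather than assuming a fixed 16.67 ms budget, and skip a frame when too little budget remains. renderNextFrame is a hypothetical app function and the 4 ms floor is an arbitrary example.

    import QuartzCore

    func renderNextFrame(deadline: CFTimeInterval) { /* app drawing goes here */ }

    func onDisplayLinkUpdate(displayLink: CADisplayLink) {
        // Budget left before the display's next refresh; drifts as described above.
        let budget = displayLink.targetTimestamp - CACurrentMediaTime()
        guard budget > 0.004 else { return }  // skip; the next callback gets a fresh budget
        renderNextFrame(deadline: displayLink.targetTimestamp)
    }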
iOS 17 camera capture assertions and issues
Hello, Starting in iOS 17, our application started having issues publishing to our video session. More specifically, video capture seems to be broken in some, but not all, sessions. What's troubling is that it fails consistently every fourth session. It also fails silently, without reporting any problems to the app. We only notice that no frames are being rendered or sent to the remote devices. Here's what shows up in the console:

    <<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
    <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)

Anyone seeing this? Any idea what could be the cause? Our sessions work perfectly on iOS 16 and below. Thanks
3
1
1.4k
Oct ’25
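Since the failure above is silent, a diagnostic sketch that may help narrow it down: observe the capture session's runtime-error and interruption notifications and log whatever reason the system reports.

    import AVFoundation

    func observeSessionProblems(_ session: AVCaptureSession) {
        let center = NotificationCenter.default
        center.addObserver(forName: .AVCaptureSessionRuntimeError,
                           object: session, queue: .main) { note in
            // AVCaptureSessionErrorKey carries the underlying AVError, if any.
            let error = note.userInfo?[AVCaptureSessionErrorKey] as? AVError
            print("Capture runtime error:", error?.localizedDescription ?? "unknown")
        }
        center.addObserver(forName: .AVCaptureSessionWasInterrupted,
                           object: session, queue: .main) { note in
            print("Capture interrupted:", note.userInfo ?? [:])
        }
    }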
Apple watch for child
Hi, my daughter was given an Apple Watch due to her grandfather's passing. It is not GPS/cellular, and we cannot connect it to her Apple ID because of this. It doesn't seem right to leave her logged in to his Apple ID, but we are currently out of options. Is there a workaround to this? When I try to set her up from my phone, it tells me that the watch must have GPS/cellular to set it up. Why? Am I missing something?
3
0
171
Jun ’25
TypeScript definitions for MusicKit JS
I take it that MusicKit JS is built with TypeScript, based on the attributions in the script: https://js-cdn.music.apple.com/musickit/v3/musickit.js. The script points to https://js-cdn.music.apple.com/musickit/v1/acknowledgements.txt – I assume this should be the v3 URL for the v3 version? It returns the same content nonetheless, and it contains attributions for TypeScript. Currently there's a third-party effort on DefinitelyTyped, which publishes the NPM package @types/musickit-js. The latest supported version available is v1; there is no version compatible with v3. This makes it hard to use MusicKit JS v3 in a TypeScript project. Please publish the types, ideally on the CDN alongside the musickit.js file. Also consider publishing an official, Apple-supported DefinitelyTyped package, or helping to maintain the existing @types/musickit-js, to make consuming this even easier.
3
1
206
Jun ’25
Background Upload Extension
Hello, We are trying to use the new Background Upload Extension to improve uploads of assets (photos, Live Photos, videos) in the background in our application.

1 - We implemented the code to upload still images. We would like to do the same for Live Photos, but we are not sure how to proceed, since a Live Photo is composed of two resources that we would like to upload in the same job so that the association between the two resources is kept. Steps: a Live Photo is captured on the device; our application is notified of new content in the Photo Library and queues the asset for upload; the system calls the background upload extension and the Live Photo is prepared for upload, but we can schedule a job for only one resource (still photo or paired video), so we cannot upload both resources in a single job.

2 - Is there a way to synchronise the application and the extension so that the application does not process the same data or execute the same requests as the extension? For example, our application works with tokens, and we would like to prevent those tokens from being consumed by the application and the extension at the same time.

3 - Is there a command in Xcode or the terminal to start or stop the extension, similar to what exists for processing tasks (https://aninterestingwebsite.com/documentation/backgroundtasks/starting-and-terminating-tasks-during-development)?
3
1
643
Feb ’26
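For part 1 above, a PhotoKit sketch showing how to get at both halves of a Live Photo; whether the Background Upload Extension can carry both payloads in one job is exactly the open question, so this only covers retrieving the paired resources.

    import Photos

    // A Live Photo asset exposes its still image and its paired video as
    // separate PHAssetResources; both can be read and packaged together
    // before scheduling an upload.
    func liveResources(of asset: PHAsset) -> (photo: PHAssetResource?, video: PHAssetResource?) {
        let resources = PHAssetResource.assetResources(for: asset)
        return (resources.first { $0.type == .photo },
                resources.first { $0.type == .pairedVideo })
    }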
Question about PT Framework channel tone behaviour
I've been wondering if there is a way to modify or even disable the tones that indicate channel states. The behaviour regarding tones seems like a black box with little documentation. During migration to Apple's PT Framework we've noticed that there are a few scenarios where a tone is played that doesn't match certain certifications. For example, moving from one channel to another produces a tone, which would fail a test case. I understand the reasoning fully, as it marks that the channel is ready to transmit or receive, but it doesn't mirror the behaviour of TETRA, which is what we want in this case. I'm also wondering whether there is any way to communicate feedback regarding the PT Framework directly?
3
0
418
Oct ’25
macOS Tahoe 26.0 (25A354) sound glitches when opening the Simulator app
Hey there, I just upgraded to macOS Tahoe on an Apple MacBook Pro 2019 16-inch. I am using IntelliJ IDEA and Flutter to develop a mobile app, which I test in the Simulator app running iOS 18.4. The issue: when I start the Simulator app (both during the loading phase and while it is operational), the audio from an already open YouTube tab in Safari glitches and becomes noise (this happens in the Chrome browser as well). A fix I found online is to kill the audio daemon on macOS with the command "sudo killall coreaudiod". This kills the audio process (while the Simulator is operational); macOS then restarts the audio daemon, and the audio works fine alongside the open Simulator. I just want to ask: is there a permanent fix for this? Is Apple working on a fix in an upcoming update?
3
5
1.3k
Oct ’25
Handling AVAudioEngine Configuration Change
Hi all, I have been quite stumped by this behavior for a little while now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in. Right now I have an AVAudioEngine that I am using for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which then causes all players attached to this engine to be stopped. Once an AVAudioPlayerNode gets stopped due to this (but also at any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played. Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset? I am not quite sure what my observer of AVAudioEngineConfigurationChange needs to be doing in my case, as this engine only handles output; all input goes through a separate engine for simplicity. Currently I store a queue of samples as they are sent to the AVAudioPlayerNode for playback, and in the completion handler I check whether the player isPlaying. If it is playing, I assume the sound was actually played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them. Thanks for any feedback!
3
0
958
Oct ’25
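A sketch of one answer to the question above, not a definitive recipe: on AVAudioEngineConfigurationChange the engine has stopped and its scheduled samples were purged, so restart the engine and reschedule from app-side bookkeeping. The pendingBuffers closure is an illustrative stand-in for the post's own queue, not an AVFoundation API.

    import AVFoundation

    func observeConfigurationChange(engine: AVAudioEngine,
                                    player: AVAudioPlayerNode,
                                    pendingBuffers: @escaping () -> [AVAudioPCMBuffer]) {
        NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: .main
        ) { _ in
            do {
                // Node connections may also need re-making if the output format changed.
                try engine.start()
                player.play()
                // Everything scheduled before the reset was purged, so schedule it again.
                for buffer in pendingBuffers() {
                    player.scheduleBuffer(buffer, completionHandler: nil)
                }
            } catch {
                print("Engine restart failed:", error)
            }
        }
    }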
Playback Issues for DRM content when sending CMCD
Since iOS and tvOS 18, CMCD can now be automatically sent by AVPlayer (https://aninterestingwebsite.com/streaming/Whats-new-HLS.pdf). However, after enabling CMCD, our streams occasionally fail with the following error: CoreMediaErrorDomain Error -17383 This issue appears to affect only DRM-protected (FairPlay) streams so far. We activate CMCD via the resource loader of an AVURLAsset, before assigning the item to an AVPlayer. Unfortunately, we haven’t found a reliable way to reproduce the issue, and we’ve been unable to gather any useful diagnostic information. Has anyone else observed this behavior when enabling CMCD on FairPlay streams?
3
0
581
Oct ’25
Photos are captured with incorrect exposure bias in specific scenarios on iPhone 17 Pro
Hey, There seems to be an inconsistency when capturing a photo with photoQualityPrioritization set to .quality on the iPhone 17 Pro main wide lens. If you zoom above 2x, the output image always has a -2.0 EV bias in the metadata and looks underexposed. This does not happen at zoom levels at or below 2x, or if you set the quality prioritization to .balanced. (Screenshots attached: with .quality and with .balanced.) This does not happen on the other lenses. I'm using a simple setup, and it is consistent across JPEG and ProRAW capture. I have a demo project if that is useful. Thanks, Alex
3
0
552
Dec ’25
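A minimal repro sketch matching the report above; the zoom factor and prioritization values come from the post, while the function shape and names are illustrative.

    import AVFoundation

    func captureRepro(output: AVCapturePhotoOutput,
                      device: AVCaptureDevice,
                      delegate: any AVCapturePhotoCaptureDelegate) throws {
        try device.lockForConfiguration()
        device.videoZoomFactor = 2.5   // above 2x, where the post sees the -2.0 EV bias
        device.unlockForConfiguration()

        // Assumes output.maxPhotoQualityPrioritization permits .quality.
        let settings = AVCapturePhotoSettings()
        settings.photoQualityPrioritization = .quality  // .balanced reportedly avoids the bias
        output.capturePhoto(with: settings, delegate: delegate)
    }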
Email "Updates to FairPlay Streaming Server SDK" makes no sense
Hi, While I don't normally use FairPlay, I got this email that is so strangely worded that I am wondering whether it makes sense to people who do know the SDK, or whether it contains a typo. You can only generate certificates for SDK 4, yet SDK 4 is no longer supported? (Also, "will not expire" is imprecise phrasing: certificates presumably will expire. I think it meant to say they are still valid / are not invalidated.)
3
0
380
Oct ’25
Launch The Main App from LockedCameraCapture
If the app is launched from LockedCameraCapture and the settings button is tapped, I need to launch the main app. CameraViewController:

    func settingsButtonTapped() {
        #if isLockedCameraCaptureExtension
        // App is launched from the Lock Screen
        // Launch main app here...
        #else
        // App is launched from the Home Screen
        self.showSettings(animated: true)
        #endif
    }

In this document: https://aninterestingwebsite.com/documentation/lockedcameracapture/creating-a-camera-experience-for-the-lock-screen Apple asks you to use:

    func launchApp(with session: LockedCameraCaptureSession, info: String) {
        Task {
            do {
                let activity = NSUserActivity(activityType: NSUserActivityTypeLockedCameraCapture)
                activity.userInfo = [UserInfoKey: info]
                try await session.openApplication(for: activity)
            } catch {
                StatusManager.displayError("Unable to open app - \(error.localizedDescription)")
            }
        }
    }

However, the documentation states that this should be placed within the extension code - LockedCameraCapture. If I do that, how can I call it all the way down from the main app's CameraViewController?
3
0
583
Nov ’25
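A sketch of one way to bridge the gap described above, assuming the extension instantiates the shared view controller: inject a closure that captures the extension's LockedCameraCaptureSession, so the controller never references the session type directly. onOpenMainApp and showSettings are illustrative app names, not Apple API.

    import UIKit

    final class CameraViewController: UIViewController {
        var onOpenMainApp: (() -> Void)?   // set by the extension; nil in the main app

        func settingsButtonTapped() {
            if let onOpenMainApp {
                onOpenMainApp()            // extension path: ends up calling launchApp(with:info:)
            } else {
                showSettings(animated: true)
            }
        }

        func showSettings(animated: Bool) { /* main-app settings UI, from the post */ }
    }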
Capturing 48mp photos with .builtInTripleCamera
I am able to capture 48 MP photos using .builtInWideAngleCamera, but it seems like .builtInTripleCamera is capped at 12 MP? Is there a way to capture 48 MP photos using .builtInTripleCamera? .builtInTripleCamera provides smooth transitions between cameras during zooming, and I'd like to keep this behavior. The new iPhone 17 Pro has all of its cameras at 48 MP. Is there a chance that its .builtInTripleCamera is capable of capturing 48 MP, or is this an API limitation?
3
0
452
Sep ’25
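A quick way to check the question above on-device, a sketch assuming 48 MP corresponds to roughly 8064 x 6048 on recent iPhones: walk the device's formats and inspect their supported maximum photo dimensions.

    import AVFoundation

    func supports48MP(_ device: AVCaptureDevice) -> Bool {
        device.formats.contains { format in
            format.supportedMaxPhotoDimensions.contains { dims in
                Int(dims.width) * Int(dims.height) >= 48_000_000
            }
        }
    }

If a qualifying format exists, it still has to be selected as the device's activeFormat, and AVCapturePhotoOutput.maxPhotoDimensions raised to match, before capture.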
CIRAWFilter.outputImage first-time cost is huge (~3s), subsequent calls are ~3ms. Any official way to pre-initialize RAW pipeline (without taking a real photo)?
Hi Apple Developer Forums, I'm developing an iOS camera app that processes RAW captures using Core Image. I'm seeing a large "first use" performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.

What's slow (important detail): I'm measuring the time for:

    let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
    let ciImage = rawFilter.outputImage

This is not CIContext.render(...) / createCGImage(...). It's just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).

Observed behavior: first time accessing CIRAWFilter.outputImage: ~3 seconds. Second time (same app session, similar RAW): ~3 milliseconds. So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.). Using Metal System Trace, I also noticed that during the slow first call there are many "Create MTLLibrary" events, while the second call doesn't show this pattern.

Warm-up attempts using a bundled DNG: I tried to "warm up" early (e.g., on camera screen entry) by loading a bundled DNG and accessing CIRAWFilter.outputImage before taking a photo. Warm-up with a ~247 KB DNG → the first real RAW outputImage cost drops to ~1.42 s. Warm-up with a ~25 MB DNG → it drops to ~843 ms. This helps, but it's still far from the steady-state ~3 ms.

Warm-up by capturing a real RAW (works, but concerns): the only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user's first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing. However: in some regions the camera shutter sound cannot be suppressed, so a "hidden warm-up capture" is unacceptable UX. I'm also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved or uploaded.

Questions: Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)? Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo? Are there any best practices (e.g., CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?

Attachments: Figure 1, first RAW capture with no warm-up (~3 s outputImage time); Figure 2, first RAW capture after warm-up with the bundled DNG (improved, but still hundreds of ms). Thanks for any guidance or experience sharing!
3
2
654
Jan ’26
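A sketch of the bundled-DNG warm-up described above, pushed one step further: not only build the graph by touching outputImage, but also render a tiny region so the Metal pipelines get compiled too. Whether this reaches steady-state timing is exactly the post's open question.

    import Foundation
    import CoreImage

    func warmUpRAWPipeline(context: CIContext, dngURL: URL) {
        DispatchQueue.global(qos: .utility).async {
            guard let filter = CIRAWFilter(imageURL: dngURL),
                  let image = filter.outputImage else { return }
            // Render a 1x1 patch to force shader compilation, then discard the result.
            _ = context.createCGImage(image, from: CGRect(x: 0, y: 0, width: 1, height: 1))
        }
    }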
Recurring FigXPCUtilities / FigCaptureSourceRemote err=-17281 logs when using AVCaptureVideoDataOutput on iOS 26.x
Hi everyone, I'm seeing recurring internal AVFoundation camera logs on iOS 26.2 and I'm trying to understand whether this is expected behavior or a regression in the capture pipeline. These logs appear shortly after starting an AVCaptureSession, while video frames are being delivered, and also when the camera is stopped or the capture session is torn down.

    <<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
    <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)

To rule out issues caused by my own code, I had GPT create a minimal SwiftUI example from scratch. Even in this clean, minimal setup, the same logs appear on iOS 26.2; the exact same logic did not produce these logs on iOS 18.x. My primary interest is to perform real-time processing on the video frames delivered by the camera (via AVCaptureVideoDataOutput), for tasks such as analysis, computer vision, or custom frame handling, while simultaneously displaying the live preview. Thanks in advance for any insight. Example Code
3
2
864
2w
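For reference when comparing logs across OS versions, a minimal video-data-output setup of the kind the post above describes; the delegate parameter and queue label are illustrative.

    import AVFoundation

    func makeSession(delegate: any AVCaptureVideoDataOutputSampleBufferDelegate) -> AVCaptureSession? {
        let session = AVCaptureSession()
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return nil }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "frames"))
        guard session.canAddOutput(output) else { return nil }
        session.addOutput(output)
        return session
    }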
Configuring capture pipeline with ProRes RAW codec
I am unable to find any clear-cut documentation on configuring an AVCaptureSession pipeline to capture video with the ProRes RAW codec type, which is a 16-bit format. Is it supported only with AVCaptureMovieFileOutput, or can one have AVCaptureVideoDataOutput emitting 16-bit sample buffers that can be vended to AVAssetWriter?
3
0
1.2k
Feb ’26
SpeechTranscriber on Simulator
I am trying to use SpeechTranscriber from the Speech framework. Is it possible to use it on the iOS 26 Simulator (on macOS Tahoe)? The supportedLocales function returns an empty array.
3
2
995
Nov ’25
Is it possible to stream video from a UVC (USB Video Class) camera on an iPhone 15?
Is it possible to stream video from a UVC (USB Video Class) camera on an iPhone 15? If so, are there any specific hardware or software requirements to enable this functionality?
3
1
200
Apr ’25