Streaming

Deep dive into the technical specifications that influence seamless playback for streaming services, including bitrates, codecs, and caching mechanisms.

Posts under the Streaming subtopic

SCK/replayd behaviors and delays
We're troubleshooting ScreenCaptureKit (SCK) issues. They occur in a relatively small number of sessions, but our lack of context, and of any way to advise the customer on how to make the behavior more predictable and reliable, is problematic. There are two distinct issues, which may or may not share a root cause:

1. Failure to establish an SCK session. This usually manifests in the app as the SCShareableContent.getWithCompletionHandler call either never invoking the completion handler or taking a prohibitively long time (we usually give it 3-10 seconds before giving up). In the system log it may look like this: (log omitted - suspecting it triggers the content filter). Note the 6-second delay to completion of fetchShareableContentWithOption (normally a 30-40 ms operation).

2. Sometimes we'd see the stream established, but some minutes (or even hours) into the recording we'd stop receiving frames.

Both scenarios are more likely to occur when disk space is low, with a reliable repro of problem #2 below 8 GB of free space (in that case, we've seen replayd silently dropping the session without ever notifying the client; an API improvement could go a long way there). However, among recent occurrences, while most machines had less than 100 GB available, we've seen it on machines with as much as 500 GB free. Unfortunately, it's almost never reproducible in our dev environment, so we have to rely on diagnostics collected in the field, which have yielded nothing obvious yet. I'd like to better understand the root cause of both scenarios and/or what specific framework behaviors can cause them.
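
For concreteness, the client-side timeout guard we put around the call looks roughly like the sketch below (simplified; the error domain "com.example.sck" and the 5-second budget are illustrative values, not part of any framework):

import Foundation
import ScreenCaptureKit

// Sketch: fetch shareable content with a client-side timeout, since the
// completion handler occasionally never fires or fires very late.
func fetchShareableContent(timeout: TimeInterval = 5.0,
                           completion: @escaping (SCShareableContent?, Error?) -> Void) {
    let lock = NSLock()
    var finished = false

    // Give up if ScreenCaptureKit has not answered within the budget.
    DispatchQueue.main.asyncAfter(deadline: .now() + timeout) {
        lock.lock(); defer { lock.unlock() }
        guard !finished else { return }
        finished = true
        let error = NSError(domain: "com.example.sck", code: -1,
                            userInfo: [NSLocalizedDescriptionKey: "shareable content fetch timed out"])
        completion(nil, error)
    }

    SCShareableContent.getWithCompletionHandler { content, error in
        lock.lock(); defer { lock.unlock() }
        guard !finished else { return } // the timeout already fired
        finished = true
        completion(content, error)
    }
}
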
Replies: 0 · Boosts: 0 · Views: 560 · Activity: Jan ’26

Inquiry about Low-Latency Frame Interpolation & Super Resolution using VTFrameProcessor
Hello, I have implemented Low-Latency Frame Interpolation using the VTFrameProcessor framework, based on the sample code from https://aninterestingwebsite.com/kr/videos/play/wwdc2025/300. It is currently working well for both LIVE and VOD streams. However, I have a few questions regarding the lifecycle management and synchronization of this feature:

1. Common questions (applicable to both Frame Interpolation and Super Resolution)

1.1 Dynamic toggling. Do you recommend enabling/disabling these features dynamically during playback, or is it better practice to configure them only during the initial setup/preparation phase? If dynamic toggling is supported, are there any recommended patterns for managing the VTFrameProcessor session lifecycle (e.g., startSession/endSession timing)?

1.2 Synchronization method. I am currently using CADisplayLink to fetch frames from AVPlayerItemVideoOutput and perform processing; a simplified sketch follows this post. Is CADisplayLink the recommended approach for real-time frame acquisition with VTFrameProcessor? If the feature needs to be toggled on/off during active playback, are there any concerns or alternative approaches you would recommend?

1.3 Supported resolution/quality range. What are the minimum and maximum video resolutions supported for each feature? Are there any aspect ratio restrictions (e.g., does it support 1:1 square videos)? Is there a recommended resolution range for optimal performance and quality?

2. Frame Interpolation specific questions

2.1 LIVE stream support. Is Low-Latency Frame Interpolation suitable for LIVE streaming scenarios where latency is critical? Are there any special considerations for LIVE vs. VOD?

3. Super Resolution specific questions

3.1 Adaptive bitrate (ABR) stream support. In ABR (HLS/DASH) streams, the video resolution can change dynamically during playback. Is VTLowLatencySuperResolutionScaler compatible with ABR streams where the resolution changes mid-playback? If a resolution change occurs, should I recreate the VTLowLatencySuperResolutionScalerConfiguration and restart the session, or does the API handle this automatically?

3.2 Small/square resolution issue. I observed that 144x144 (1:1 square) videos fail with the error "VTFrameProcessorErrorDomain Code=-19730: processWithSourceFrame within VCPFrameSuperResolutionProcessor failed", while 480x270 (16:9) videos work correctly. minimumDimensions reports 96x96, but 144x144 still fails. Is there an undocumented restriction on aspect ratio, or a practical minimum resolution?

3.3 Scale factor selection. supportedScaleFactors returns [2.0, 4.0] for most resolutions. Is there a recommended scale factor for balancing quality and performance? Are there scenarios where 4.0x should be avoided?

The documentation on this specific topic seems limited, so I would appreciate any insights or advice. Thank you.
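
For context on 1.2, here is a simplified sketch of the CADisplayLink-driven frame pull; the FramePuller class name is mine, and the VTFrameProcessor hand-off is reduced to a placeholder:

import AVFoundation
import UIKit

// Simplified sketch of the display-link driven pull described in 1.2.
final class FramePuller: NSObject {
    private let output: AVPlayerItemVideoOutput
    private var displayLink: CADisplayLink?

    init(output: AVPlayerItemVideoOutput) {
        self.output = output
        super.init()
    }

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        // Map the next vsync to item time and pull a frame if one is ready.
        let itemTime = output.itemTime(forHostTime: link.timestamp + link.duration)
        guard output.hasNewPixelBuffer(forItemTime: itemTime),
              let pixelBuffer = output.copyPixelBuffer(forItemTime: itemTime,
                                                       itemTimeForDisplay: nil) else { return }
        process(pixelBuffer)
    }

    private func process(_ buffer: CVPixelBuffer) {
        // Placeholder: hand the buffer to VTFrameProcessor here.
    }
}
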
Replies: 0 · Boosts: 0 · Views: 444 · Activity: Feb ’26

A mistake in FairPlay Streaming SDK 26 sample code on comparing ProtocolVersionUsed value?
Hello, I am reviewing the sample code in FairPlay Streaming SDK 26, and I found a place that I think is a mistake. The code in question is server-side and appears in both the Swift and the Rust versions. There is an if statement that compares "ProtocolVersionUsed" (spcData.versionUsed) against the SPCVersion1 constant. However, "ProtocolVersionUsed" and the SPC version are different things, so shouldn't it be compared against a different constant?

[createContentKeyPayload.swift]

// Fallback to version 1 if content can have encrypted slice headers, which need to be decrypted separately. Slice headers are not encrypted when using CBCS.
if serverCtx.spcContainer.spcData.versionUsed == base_constants.SPCVersion.v1.rawValue &&

[createContentKeyPayload.rs]

// Fallback to version 1 if content can have encrypted slice headers, which need to be decrypted separately. Slice headers are not encrypted when using CBCS.
if (serverCtx.spcContainer.spcData.versionUsed == SPCVersion::v1 as u32) &&

Thank you.
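
To make the distinction concrete, a hypothetical Swift sketch follows; every name in it (spcContainerVersion, protocolVersionUsed, fpsProtocolVersion1) is invented for illustration and is not an actual FPS SDK symbol:

// Hypothetical illustration only; these are not FPS SDK identifiers.
let spcContainerVersion: UInt32 = 1   // version of the SPC message format itself
let protocolVersionUsed: UInt32 = 1   // protocol version reported by the client in its TLLV

// A protocol-version constant, distinct from any SPC container version constant.
let fpsProtocolVersion1: UInt32 = 1

// The sample compares the protocol version against an SPC version constant;
// if the two value spaces differ, a protocol-version constant seems more appropriate:
if protocolVersionUsed == fpsProtocolVersion1 {
    // fall back to the version 1 content key payload path
}
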
Replies: 0 · Boosts: 0 · Views: 102 · Activity: Feb ’26

Unity iOS (Metal) → WebRTC (Unity WebRTC) video stream to remote Unity client: PeerConnection connects but receiver renders black frames
Hello! My name is Mason Prather. I'm a graduate student at Kennesaw State University and a Research Engineer working in XR environments through my Graduate Research Assistant role. I'm currently building a research prototype that connects a mobile companion application to a VR headset. The mobile application is built in Unity and deployed on iOS, and it streams video frames to a remote Unity client using WebRTC.

Environment
- Device: iPhone 15
- OS: iOS 26.3 (tested on a physical device, not the Simulator)
- Engine: Unity 2022.3.57f1
- Graphics API: Metal
- Streaming technology: WebRTC (Unity WebRTC package)
- Architecture: mobile Unity app streaming video frames to a remote Unity client
- Receiver device: Meta Quest Pro headset (Unity application)
- Networking: LAN (UDP discovery + TCP signaling)
- Video source: Unity RenderTexture

Goal
The system should allow a VR user to view media stored on their phone inside a VR environment. The iOS app renders or captures media content, converts frames into a WebRTC video track, and streams the video to the headset.

Current status
Connection setup works correctly: signaling succeeds, ICE candidates are exchanged, the PeerConnection state becomes Connected, and the video track is created successfully. However, the receiving application displays black frames.

iOS app details
The video source originates from a Unity RenderTexture. Inside the phone application the RenderTexture displays correctly and frames appear correct locally, but the receiving peer does not display them. Relevant components: the Unity WebRTC package, the iOS Metal rendering pipeline, custom TCP signaling, and LAN discovery via UDP.

Expected behavior: rendered frames should transmit via WebRTC and appear on the remote device.
Actual behavior: the remote video track is active, but the rendered frames appear black on the receiving client.

Questions
- Are there known issues involving Unity WebRTC + iOS Metal texture capture?
- Are there specific pixel format requirements when streaming textures from Unity on iOS?
- Could the issue relate to texture readback limitations or GPU synchronization?

I am more than happy to provide screenshots and console logs upon request. If anyone has experience streaming Unity video frames via WebRTC on iOS, I would greatly appreciate any guidance.
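
On the pixel format question specifically: iOS video pipelines, WebRTC included, commonly work with bi-planar YCbCr (NV12) buffers rather than the BGRA textures Unity renders into, so a format mismatch is one plausible cause of black frames. A minimal Swift sketch of allocating a Metal-compatible NV12 buffer follows; converting the RenderTexture contents into it is a separate step not shown here, and makeNV12Buffer is just an illustrative helper name:

import CoreVideo

// Sketch: allocate an NV12 pixel buffer that is Metal- and IOSurface-backed,
// a common shape for GPU-produced frames headed into a video encoder.
func makeNV12Buffer(width: Int, height: Int) -> CVPixelBuffer? {
    let attrs: [String: Any] = [
        kCVPixelBufferMetalCompatibilityKey as String: true,
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ]
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, // NV12
                                     attrs as CFDictionary,
                                     &buffer)
    return status == kCVReturnSuccess ? buffer : nil
}
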
Replies: 0 · Boosts: 0 · Views: 214 · Activity: Mar ’26

Generating a new FPS certificate (SDK 26) alongside an existing SDK 4 certificate
Hi, our client currently has an FPS deployment certificate generated with SDK version 4 that is still actively used in production. They would like to generate an additional certificate using SDK version 26. Before doing so, they want to confirm two things: Will the existing SDK 4 certificate remain unaffected and still visible in the Apple Developer portal? And are there any considerations they should keep in mind? Thanks!
Replies: 0 · Boosts: 0 · Views: 139 · Activity: 2w

Technical guidance request: native screen capture protection on macOS with Flutter while allowing AirPlay
Hello Apple Developer Support, I am reaching out for technical guidance regarding screen capture protection behavior on macOS. We are building a desktop application using Flutter on macOS, and we have implemented native Swift code inside the macOS Runner to protect sensitive content from screen recording and screen sharing. Our current implementation relies on native window-level protection and display state handling in Swift, while the main UI remains rendered by Flutter.

The main challenge we are facing is the following:
- we need to keep strong native anti-recording protection on macOS
- the application is heavily used with AirPlay and screen mirroring
- currently, AirPlay/mirroring is often treated by the system the same way as screen capture or screen recording
- this causes our protected content to be replaced by a gray or blank area even during legitimate AirPlay usage

In practice, we would like to allow AirPlay and legitimate external display/mirroring usage, while still preventing screen recording, screen sharing, and unauthorized screen capture.

We would like to know whether Apple recommends an officially supported approach for this use case, preferably using public APIs. More specifically:
- Is there an officially supported way on macOS to distinguish AirPlay mirroring from screen recording/screen sharing?
- Is NSWindow.sharingType the recommended public API for this scenario? (A sketch of our current usage follows this post.)
- Is there a recommended approach when the UI surface is rendered through Flutter/Metal?
- Are there any best practices with ScreenCaptureKit for protecting content without affecting AirPlay?

We understand that some lower-level APIs may not be officially supported, so we would greatly appreciate guidance toward a public and future-proof implementation path. Thank you very much for your time and support. Best regards, Tony
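
For reference, the window-level piece we rely on today is roughly the following (simplified; how the NSWindow is obtained from the Flutter view hierarchy is app-specific, and the helper names are ours):

import AppKit

// Simplified: exclude a window's content from capture and sharing. Note this
// flag does not distinguish AirPlay mirroring from recording, which is
// exactly the limitation described above.
func protectWindow(_ window: NSWindow) {
    window.sharingType = .none      // content appears blank to capture clients
}

func unprotectWindow(_ window: NSWindow) {
    window.sharingType = .readOnly  // content may be read (captured) again
}
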
Replies: 0 · Boosts: 0 · Views: 31 · Activity: 2d

Apple Vision Pro streaming spatial video transmission
I want to develop an app for real-time streaming of spatial video from one Apple Vision Pro to another Apple Vision Pro and playing it there, e.g., as MV-HEVC. Is this possible, and if so, how can it be done?
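
As a possible starting point on the encoding side: my understanding is that VideoToolbox exposes MV-HEVC through an ordinary HEVC compression session plus a multiview layer property. The sketch below reflects that assumption (in particular the kVTCompressionPropertyKey_MVHEVCVideoLayerIDs key); transport of the encoded sample buffers between the two devices is left entirely to the app:

import VideoToolbox

// Sketch: create an HEVC compression session and request multiview (MV-HEVC)
// output with two video layers (left and right eye).
func makeMVHEVCSession(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,   // encode with the per-frame outputHandler variant instead
        refcon: nil,
        compressionSessionOut: &session)
    guard status == noErr, let session else { return nil }

    // Two layer IDs turn the session into an MV-HEVC (stereo) encode.
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_MVHEVCVideoLayerIDs,
                         value: [0, 1] as CFArray)
    return session
}
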
Replies: 0 · Boosts: 0 · Views: 140 · Activity: Jan ’26

offline with FairPlay
I am working on offline license support for FairPlay. I am trying to distinguish offline license requests from live streaming requests. How do I do that? Is there any way to identify the request type (offline or streaming) from the SPC?
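
For client-side context only (this does not answer the server-side SPC question): on the device, an offline key request surfaces as AVPersistableContentKeyRequest, so the two flows are at least distinguishable at the point where the SPC is generated. A minimal delegate sketch, assuming the standard AVContentKeySession flow:

import AVFoundation

// Client-side sketch: streaming and offline key requests arrive through
// different delegate callbacks, so the flows diverge before the SPC is made.
final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
    // Streaming (non-persistable) request.
    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        // For offline use, escalate with
        // respondByRequestingPersistableContentKeyRequestAndReturnError();
        // otherwise generate the SPC and fetch the CKC as usual.
    }

    // Offline (persistable) request, delivered only after the escalation above.
    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVPersistableContentKeyRequest) {
        // Generate the SPC here; the returned license may be persisted on disk.
    }
}
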
Replies: 0 · Boosts: 1 · Views: 78 · Activity: 3w
