Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under Graphics & Games topic

Post

Replies

Boosts

Views

Activity

Request low-latency streaming for iOS/iPadOS
Just found out this key is available for visionOS: https://aninterestingwebsite.com/documentation/bundleresources/entitlements/com.apple.developer.low-latency-streaming It seems to keep video streaming from being interrupted by AWDL. Our community needs it badly for self-hosted game streaming (PC to iPhone/iPad). Related apps: Moonlight / VoidLink / SteamLink. Can we expect this on iOS/iPadOS 26, or even iOS/iPadOS 18?
1
3
416
Nov ’25
VRAM not freeing in Elite Dangerous
So I've been trying out GPTK with Elite Dangerous Horizons, and from what I can tell, VRAM usage keeps going up until it goes over the limit, at which point the FPS drops to 1-3 and the game crashes. From the Performance HUD I can see that when using GPTK, VRAM usage just keeps climbing, and I never saw it drop at all. From some limited testing, I think I can conclude that it is probably not a VRAM leak, but caching: I noticed that if I went back to an area I had been to before, VRAM usage didn't increase. So either something is wrong with the freeing of VRAM, or GPTK might not be reporting the right amount of available VRAM, which would explain why it keeps allocating until it runs out of memory and the game crashes. Just to test, I tried running the game with the DXVK+MoltenVK combo, and it works just fine: VRAM is freed when it's no longer used. Is this a known issue in some games?
12
3
1k
Apr ’25
PDFKit Page Rerender
I'm experiencing an issue with PDFKit where page.removeAnnotation(annotation) successfully removes the annotation from the page's data structure, but the PDFView no longer updates automatically to reflect the change visually. Issue details:
• The annotation is removed (verified by checking page.annotations.count)
• The PDFView display doesn't refresh to show the removal
• This code was working correctly before and suddenly stopped working
• No code changes were made on my end
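Until someone pins down the regression, a minimal workaround sketch: PDFView has annotationsChanged(on:) for exactly this notification, and layoutDocumentView() can force a refresh. The helper function itself is hypothetical.

import PDFKit

// Sketch of a possible workaround (helper name hypothetical), assuming
// `pdfView` is the PDFView displaying the page. Not a confirmed fix.
func remove(_ annotation: PDFAnnotation, from page: PDFPage, in pdfView: PDFView) {
    page.removeAnnotation(annotation)
    // Tell the view this page's annotations changed so it redraws them.
    pdfView.annotationsChanged(on: page)
    // If the view still shows stale content, force a layout pass.
    pdfView.layoutDocumentView()
}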
3
2
900
Dec ’25
GameCenter scores are not being posted to the leaderboard
Hello! Bear with me here, as there is a lot to explain! I am working on implementing a Game Center high score leaderboard into my game. I have looked around for examples of how to properly implement this, but have come up short on finding much material. Therefore, I have tried implementing it myself based on information I found in Apple's documentation. Long story short, I am getting success printed when I update my score, but no scores are actually being posted (or at least no scores show up on the Game Center leaderboard when opened). Before I show the code, one thing I have questioned is the fact that this game is still in development. In App Store Connect, the status of the leaderboard is "Not Live". Does this affect scores being posted? Onto the code. I have created a GameCenter class which handles getting the leaderboards and posting scores to a specific leaderboard. I will post the code in whole, and will discuss below what is happening. PLEASE VIEW THE ATTACHED TEXT TO SEE THE GAMECENTER CLASS! GameCenter class - https://aninterestingwebsite.com/forums/content/attachment/0dd6dca8-8131-44c8-b928-77b3578bd970 In a different GameScene, once the game is over, I request to post a new high score to Game Center with this line of code:

GameCenter.shared.submitScore(id: GameCenterLeaderboards.HighScore.rawValue)

Now onto the logic of my code. For the longest time I struggled to figure out how to submit a score. I figured out that in Xcode 12, they deprecated a lot of functions that previously worked for me. Now it seems that we have to load all leaderboards (or the ones we want). That is the purpose behind the private leaderboards variable in the GameCenter class. On startup of the app, I call authenticate player. Once this callback is reached, I call loadLeaderboards, which loads the leaderboards for each string id in an enum that I have elsewhere. Each of these leaderboards is created as a Leaderboard object and saved in the private leaderboard array, so I have access to these leaderboards later when I want to submit a score. Once the game is over, I call submitScore with the leaderboard id I want to post to. Right now I only have a high score, but in the future I may add a value parameter so it works for other leaderboards as well; for now, no value is passed in, since I am pulling from local storage, which holds the high score. submitScore gets the leaderboard from the private leaderboard array that has the same id as the one passed in. Once I get the correct leaderboard, I submit a score to it. Once the callback is hit, I receive the output "Successfully submitted score to leaderboard". This looks promising, except for the fact that no score is actually posted. At startup, I call updatePlayerHighScore, which is not complete, but for the purpose of my point retrieves the player's high score from the leaderboard and prints it to the console. It prints out (0), meaning that no score was posted. The last thing I have questions about is the context when submitting a score. According to the documentation, this seems to just be metadata that Game Center does not care about, but rather something the developer can use; therefore, I think I can cross this off as causing the problem. I believe I implemented this correctly, but for some reason nothing is posting to the leaderboard. This was A LOT, but I wanted to make sure I got all my thoughts down.
Any help on why this is NOT posting would be awesome! Thanks so much! Mark
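For comparison, a minimal hedged sketch of the same submission without loading leaderboard objects first, using the class method added after the Xcode 12 deprecations (the leaderboard ID here is a placeholder). Also worth noting: a "Not Live" leaderboard typically still accepts scores in the sandbox environment, where they are visible only to test accounts.

import GameKit

// Minimal sketch of the post-Xcode-12 submission path.
// "com.example.highscore" is a placeholder leaderboard ID.
func submitHighScore(_ score: Int) {
    guard GKLocalPlayer.local.isAuthenticated else { return }
    GKLeaderboard.submitScore(
        score,
        context: 0,
        player: GKLocalPlayer.local,
        leaderboardIDs: ["com.example.highscore"]
    ) { error in
        if let error = error {
            print("Submit failed: \(error)")
        } else {
            print("Successfully submitted score to leaderboard")
        }
    }
}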
9
2
5.3k
Oct ’25
Pink Screen with VideoMaterial in ARKit
Hi everyone, I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content. Here's a simplified version of my setup:

func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float, aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
    let width = (fitsWidth) ? canvasWidth : canvasHeight * aspectRatio
    let height = (fitsWidth) ? canvasWidth * (1/aspectRatio) : canvasHeight
    let screenPlane = MeshResource.generatePlane(width: width, depth: height)
    let videoMaterial: Material = createVideoMaterial(videoItem: video)
    let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
    return videoScreenModel
}

func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
    let player = AVPlayer(playerItem: videoItem)
    let videoMaterial = VideoMaterial(avPlayer: player)
    player.play()
    return videoMaterial
}

Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it? Thanks in advance!
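One hedged diagnostic, assuming the pink frame means the video texture never became available: hold off on play() until the AVPlayerItem reports .readyToPlay, and surface any item error. The observation-based helper below is an assumption, not a known fix, and the caller must retain the returned observation.

import AVFoundation
import RealityKit

func createVideoMaterial(videoItem: AVPlayerItem,
                         onReady: @escaping (VideoMaterial) -> Void) -> NSKeyValueObservation {
    let player = AVPlayer(playerItem: videoItem)
    let material = VideoMaterial(avPlayer: player)
    // Only start playback once the item is actually ready; log failures.
    return videoItem.observe(\.status, options: [.initial, .new]) { item, _ in
        switch item.status {
        case .readyToPlay:
            player.play()
            onReady(material)
        case .failed:
            print("AVPlayerItem failed: \(String(describing: item.error))")
        default:
            break
        }
    }
}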
16
3
1.6k
May ’25
Showing a MTLTexture on an Entity in RealityKit
Is there any standard way of efficiently showing a MTLTexture on a RealityKit Entity? I can't find anything proper on how to, for example, generate a LowLevelTexture out of a MTLTexture. The closest match was this two-year-old thread. In the old SceneKit app, we would just do:

guard let material = someNode.geometry?.materials.first else { return }
material.diffuse.contents = mtlTexture

Our flow is as follows (for visualizing the currently detected object): camera stream -> CoreML segmentation -> send the relevant part of the MLShapedArray tensor to a MTLComputeShader that returns a MTLTexture -> show the resulting texture on a 3D object to the user.
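A sketch of one possible route on iOS 18+/visionOS 2+, assuming the source texture's usage allows blits; the calls are RealityKit's LowLevelTexture API, but treat the exact descriptor setup as an assumption rather than a vetted recipe.

import Metal
import RealityKit

// Sketch: copy an existing MTLTexture into a RealityKit LowLevelTexture
// and show it on an entity. Descriptor details are assumptions.
func makeEntity(showing source: MTLTexture,
                queue: MTLCommandQueue) throws -> ModelEntity {
    var desc = LowLevelTexture.Descriptor()
    desc.pixelFormat = source.pixelFormat   // must match the source
    desc.width = source.width
    desc.height = source.height
    desc.textureUsage = [.shaderRead, .shaderWrite]
    let lowLevel = try LowLevelTexture(descriptor: desc)

    // Blit the source into the texture backing the LowLevelTexture.
    let commandBuffer = queue.makeCommandBuffer()!
    let target = lowLevel.replace(using: commandBuffer)  // writable MTLTexture
    let blit = commandBuffer.makeBlitCommandEncoder()!
    blit.copy(from: source, to: target)
    blit.endEncoding()
    commandBuffer.commit()

    // Wrap it in a TextureResource and an unlit material.
    let resource = try TextureResource(from: lowLevel)
    var material = UnlitMaterial()
    material.color = .init(texture: .init(resource))
    return ModelEntity(mesh: .generatePlane(width: 1, depth: 1),
                       materials: [material])
}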
5
0
1.1k
Sep ’25
BlendShapes don’t animate while playing animation in RealityKit
Hi everyone, I’m running into an issue with RealityKit when trying to animate BlendShapes (ShapeKeys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender). The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that weights are being updated, but the visual changes are not visible. The exact same blend shape animation works perfectly when no animation is playing. In SceneKit the same model works as expected: shape keys get animated during animation playback. But not in RealityKit: as soon as an animation starts, the shape keys don’t animate anymore. Here’s a test project on GitHub that demonstrates the issue clearly: https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing. Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates? Thanks in advance for any insights.
3
3
346
Jul ’25
Is migrating from ARView to RealityView recommended?
We're using RealityKit to create a science education AR app for iOS, iPadOS, and visionOS. In the WWDC25 session video "Bring your SceneKit project to RealityKit" https://aninterestingwebsite.com/videos/play/wwdc2025/288 at 8:15, it's explained that when using RealityKit, RealityView should be used in all cases, whereas in the past, SceneKit required SCNView, SceneView, or ARSCNView, depending on an app's requirements. Because the initial development of our app on iOS predates iOS 18's RealityView, our app currently uses ARView to render RealityKit AR content on iOS and iPadOS. Is it recommended that we migrate to RealityView, or can we safely continue using our existing ARView implementation? We'd prefer to avoid unnecessary development cost. If migrating from ARView to RealityView is recommended, what specific benefits should we expect from this transition? Thank you.
1
2
314
Jun ’25
MTL4FXTemporalDenoisedScaler initialization
I’m trying to use MTL4FXTemporalDenoisedScaler, and I’m seeing a crash during initialization even with a very simple sample app. I created a minimal sample here: https://github.com/tatsuya-ogawa/MetalFXInitExample The exception is:

NSException: "-[AGXG16XFamilyHeap baseObject]: unrecognized selector sent to instance ..."

What I found is:
• This works: descriptor.makeTemporalDenoisedScaler(device: device)
• This crashes: descriptor.makeTemporalDenoisedScaler(device: device, compiler: metal4Compiler)

So the issue seems to happen only with the Metal4FX version. For testing, I’m using an iPhone 15 Pro. According to the Metal Feature Set Tables, MetalFX denoised upscaling should be supported on Apple9 and later, so I believe the device itself should meet the requirements. Reference: https://aninterestingwebsite.com/metal/Metal-Feature-Set-Tables.pdf Has anyone seen this before, or knows what might be causing it? I’d appreciate any advice. Thanks.
0
2
84
2w
Roblox freezing on macOS 26.1 Beta
After updating to macOS 26.1 I encountered an issue where Roblox freezes quite often, for 10-60 seconds at a time. This is really annoying, as I play the game a lot. My theory is that it is a driver issue with Metal or something similar. I have reinstalled macOS, reinstalled the game, and manually lowered the performance settings, but nothing is working. Wondering if you could help, when it will be fixed, and whether others are having the same issue. Many thanks, William.
4
2
987
Oct ’25
SpriteKit scene used as SCNView.overlaySKScene crashes due to SKShapeNode
I recently published my first game on the App Store. It uses SceneKit with a SpriteKit overlay. All crashes Xcode has downloaded for it so far are related to SpriteKit/SceneKit internals. The most common crash is caused by SKCShapeNode::_NEW_copyRenderPathData. What could cause such a crash? crash.crash While developing this game (and the BoardGameKit framework that appears in the crash log) over the years, I experienced many crashes presumably caused by the SpriteKit overlay (in September 2024 I opened a post, "SceneKit app randomly crashes with EXC_BAD_ACCESS in jet_context::set_fragment_texture", about such a crash), and other people on the internet also mention crashes when using SpriteKit as a SceneKit overlay. Should I use a separate SKView and lay it on top of the SCNView rather than setting SCNView.overlaySKScene? That seemed to solve the crashes for someone on Stack Overflow, but is it also encouraged by Apple? I know SceneKit is deprecated, but according to Apple, critical bugs will still be fixed. Could this be considered a critical bug?
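For reference, a hedged sketch of that alternative (the helper name is hypothetical); whether Apple endorses it over overlaySKScene is exactly the open question.

import SceneKit
import SpriteKit
import UIKit

// Sketch: lay a transparent SKView over the SCNView instead of using
// SCNView.overlaySKScene. This assumes the overlay is purely visual;
// drop isUserInteractionEnabled = false if the overlay has controls.
func addOverlay(_ scene: SKScene, over scnView: SCNView) -> SKView {
    let skView = SKView(frame: scnView.bounds)
    skView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    skView.allowsTransparency = true      // let the 3D scene show through
    skView.backgroundColor = .clear
    skView.isUserInteractionEnabled = false
    scene.backgroundColor = .clear
    skView.presentScene(scene)
    scnView.addSubview(skView)
    return skView
}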
4
0
777
Jan ’26
Spatial Scene API for iOS Apps
As part of the WWDC25 Keynote, a technology was announced that can present 2D images as 3D spatial scenes. This announcement is supported by a Press Release. ...developers can use the Spatial Scene API to make their app experience even more immersive. Zillow is taking advantage of the API for their Zillow Immersive app, allowing users to see images of homes and apartments with the rich depth and dimension that spatial scenes offer. The feature also appears in the Photos App on iOS 26 Developer Beta 1. Tapping "Spatial Scene" on any photo opens a view of that photo with a parallax effect. I've searched the WWDC sessions and new documentation and have come up short. Reaching out here for help. Is there any documentation for Spatial Scene API? Or any guidance on how to implement the spatial scene in iOS?
1
2
364
Jun ’25
Metal debug log in Swift Package
My goal is to print a debug message from a shader. I followed the guide, which says to set the -fmetal-enable-logging Metal compiler flag and the following environment variables:

MTL_LOG_LEVEL=MTLLogLevelDebug
MTL_LOG_BUFFER_SIZE=2048
MTL_LOG_TO_STDERR=1

However, there's an issue with the guide: it only covers Xcode project setup, whereas I'm working on a Swift Package. It has a Metal-only target that's included into the main target like this:

targets: [
    // A separate target for shaders.
    .target(
        name: "MetalShaders",
        resources: [
            .process("Metal")
        ],
        plugins: [
            // https://github.com/schwa/MetalCompilerPlugin
            .plugin(name: "MetalCompilerPlugin", package: "MetalCompilerPlugin")
        ]
    ),
    // Main target
    .target(
        name: "MegApp",
        dependencies: ["MetalShaders"]
    ),
    .testTarget(
        name: "MegAppTests",
        dependencies: [
            "MegApp",
            "MetalShaders",
        ]
    )
]

So to apply the compiler flag I use MetalCompilerPlugin, which emits debug.metallib; it also allows defining a DEBUG macro for shaders. This code compiles:

#ifdef DEBUG
logger.log_error("Hello There!");
os_log_default.log_debug("Hello thread: %d", gid);
// this proves that the code executes
result.flag = true;
#endif

The environment is set via .xctestplan and validated to work with ProcessInfo. However, nothing is printed to the Xcode console nor to the Console app. In an attempt to fix it, I'm trying to set up a MTLLogState; however, makeLogState(descriptor:) fails with an error:

if #available(iOS 18.0, *) {
    let logDescriptor = MTLLogStateDescriptor()
    logDescriptor.level = .debug
    logDescriptor.bufferSize = 2048
    // Error Domain=MTLLogStateErrorDomain Code=2 "Cannot create residency set for MTLLogState: (null)"
    // UserInfo={NSLocalizedDescription=Cannot create residency set for MTLLogState: (null)}
    let logState = try! device.makeLogState(descriptor: logDescriptor)
    commandBufferDescriptor.logState = logState
}

Some LLMs suggested that this is connected with the Simulator, and indeed, I run the tests on the Simulator. However, the tests don't want to run on an iPhone... I found a solution running them on My Mac (Mac Catalyst). Surprisingly, the descriptor log works there, even without MTLLogState. But the Simulator behaviour seems like a bug...
1
1
676
Jan ’26
GameKit not working as expected in iOS 26.
I just upgraded my macOS, Xcode, and Simulator all to the newest beta version 26. Then I found two issues when building my app with Xcode 26 and running it on simulator 26. 1. The Game Center access point no longer shows up in the app. This is how it's configured, and it still works on simulator 18.4:

func authenticatePlayer() {
    GKAccessPoint.shared.location = .topTrailing
    self.localPlayer.authenticateHandler = { viewController, error in
        if let viewController = viewController {
            // can present Game Center login screen
        } else if self.localPlayer.isAuthenticated {
            // game can be started
        } else {
            // user didn't log in, continue the game without game center
        }
    }
}

2. After the game ends, the leaderboard won't load. This is how it's implemented, and it's still working on simulator 18.4:

struct GameCenterView: UIViewControllerRepresentable {
    @Environment(\.presentationMode) var presentationMode
    ...
    func makeUIViewController(context: Context) -> GKGameCenterViewController {
        let viewController = GKGameCenterViewController(
            leaderboardID: getLeaderBoardID(with: leaderBoardGameMode),
            playerScope: .global,
            timeScope: .allTime
        )
        viewController.gameCenterDelegate = context.coordinator
        return viewController
    }

    func updateUIViewController(_ uiViewController: GKGameCenterViewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    class Coordinator: NSObject, GKGameCenterControllerDelegate {
        let parent: GameCenterView

        init(_ parent: GameCenterView) {
            self.parent = parent
        }

        func gameCenterViewControllerDidFinish(_ gameCenterViewController: GKGameCenterViewController) {
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
5
2
488
Sep ’25
Per-vertex color in a custom RealityKit mesh? (macOS)
I'm working on an application for viewing AMF models on macOS, using RealityKit. AMF supports several different ways to color models, including per-vertex color (where the color of a triangle is interpolated from vertex to vertex) as well as per-face color (where the color of the triangle is the same across the entire face). I'm trying to figure out how to support those color models using a RealityKit mesh. Apple's documentation (https://aninterestingwebsite.com/documentation/realitykit/modifying-realitykit-rendering-using-custom-materials) talks about per-vertex colors, but I haven't found a way to create a mesh that includes per-vertex colors, other than using a texture map (which might be the correct solution). Can someone give me some pointers?
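One avenue, sketched under assumptions rather than a confirmed solution: RealityKit's LowLevelMesh (macOS 15+) declares vertex attributes by semantic, and the semantics include .color, so a mesh can at least carry per-vertex color data. Whether the standard materials shade with it, or a custom/ShaderGraph material is needed to read it, is the open question.

import Metal
import RealityKit

// Sketch: declare a LowLevelMesh vertex layout carrying a per-vertex
// color attribute. Offsets/strides here are illustrative assumptions.
func makeColoredMeshShell() throws -> LowLevelMesh {
    let attributes: [LowLevelMesh.Attribute] = [
        .init(semantic: .position, format: .float3, offset: 0),
        .init(semantic: .color, format: .float4, offset: 12),
    ]
    let layouts: [LowLevelMesh.Layout] = [
        .init(bufferIndex: 0, bufferStride: 28)  // 12 (position) + 16 (color)
    ]
    var desc = LowLevelMesh.Descriptor()
    desc.vertexAttributes = attributes
    desc.vertexLayouts = layouts
    desc.vertexCapacity = 3     // fill with real vertex data before use
    desc.indexCapacity = 3
    desc.indexType = .uint32
    return try LowLevelMesh(descriptor: desc)
}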
6
2
2.0k
Nov ’25
GestureComponent does not support DragGesture
The following code using the new GestureComponent demonstrates an inconsistency: the tap gesture prints output, but the drag gesture does not. I already checked this post, which points to this seemingly outdated sample code. I assume that example is deprecated in favour of the now built-in version of GestureComponent. Nonetheless, there are no compiler warnings or errors; it just fails silently. TapGesture, LongPressGesture, MagnifyGesture, and RotateGesture all work, so this feels like an oversight.

RealityView { content in
    let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
    testEntity.position = SIMD3<Float>(0, 0, -1)
    testEntity.components.set(InputTargetComponent())
    testEntity.components.set(CollisionComponent(
        shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
    ))

    let testGesture = TapGesture()
        .onEnded { value in
            print("Tapped")
        }
    testEntity.components.set(GestureComponent(testGesture))

    let dragGesture = DragGesture()
        .onEnded { value in
            print("Dragged")
        }
    testEntity.components.set(GestureComponent(dragGesture))

    content.add(testEntity)
}
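Until that's resolved, a hedged workaround sketch: attach the drag through SwiftUI's targetedToEntity(_:) instead of GestureComponent. The wrapper view here is hypothetical; the entity still needs its InputTargetComponent and CollisionComponent as above.

import SwiftUI
import RealityKit

struct DragWorkaroundView: View {
    let testEntity: ModelEntity   // configured with input-target + collision, as above

    var body: some View {
        RealityView { content in
            content.add(testEntity)
        }
        .gesture(
            DragGesture()
                .targetedToEntity(testEntity)
                .onChanged { value in
                    // Follow the drag in the entity's parent space.
                    guard let parent = value.entity.parent else { return }
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: parent)
                }
                .onEnded { _ in print("Dragged") }
        )
    }
}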
3
1
436
Jul ’25
Memory leak when no draw calls issued to encoder
I noticed that when the render command encoder adds no draw calls, an app's memory usage seems to grow unboundedly, using a super simple MTKView-based drawing with the following delegate (code at end). If I add the simplest of draw calls, e.g., a single vertex, the app's memory usage is normal, around 100-ish MB. I am attaching a couple of screenshots, one from Xcode and one from Instruments. What's going on here? Is this an illegal program? If yes, why does it not crash, as it would if the encoder or command buffer weren't ended? Or is there some race condition at play here due to the lack of draws?

class Renderer: NSObject, MTKViewDelegate {
    var device: MTLDevice
    var commandQueue: MTL4CommandQueue
    var commandBuffer: MTL4CommandBuffer
    var allocator: MTL4CommandAllocator

    override init() {
        guard let d = MTLCreateSystemDefaultDevice(),
              let queue = d.makeMTL4CommandQueue(),
              let cmdBuffer = d.makeCommandBuffer(),
              let alloc = d.makeCommandAllocator()
        else {
            fatalError("unable to create metal 4 objects")
        }
        self.device = d
        self.commandQueue = queue
        self.commandBuffer = cmdBuffer
        self.allocator = alloc
        super.init()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let drawable = view.currentDrawable else { return }
        commandBuffer.beginCommandBuffer(allocator: allocator)
        guard let descriptor = view.currentMTL4RenderPassDescriptor,
              let encoder = commandBuffer.makeRenderCommandEncoder(
                descriptor: descriptor
              )
        else {
            fatalError("unable to create encoder")
        }
        encoder.endEncoding()
        commandBuffer.endCommandBuffer()
        commandQueue.waitForDrawable(drawable)
        commandQueue.commit([commandBuffer])
        commandQueue.signalDrawable(drawable)
        drawable.present()
    }
}
3
0
435
Jan ’26
RealityKit captureHighResolutionFrame from session is broken on iOS 26?
A bit of background on what our app is doing: we have a RealityKit ARView session running. During this period we place objects in RealityKit. At some point the user can "take a photo", and we use session.captureHighResolutionFrame to capture a frame. We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D. The issue we found is that on devices running iOS 26, for the first photo the user takes, the first frame received from session.captureHighResolutionFrame gives an incorrect CGPoint for frame.camera.projectPoint. If the user takes a second photo with the same camera position, the second frame received from session.captureHighResolutionFrame gives a correct CGPoint for frame.camera.projectPoint. I noticed a difference between the first and subsequent frames that I believe corresponds to the issue: the yaw value of the camera (frame.camera.eulerAngles.y) on the first frame is not correct (inconsistent with any subsequent frame). I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while iOS 18.* has consistent values between the first and subsequent captured frames. Note: the yaw value seems to differ more if we start the session in portrait but take the photo in landscape. Example result for 3 captured frames:

Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839

Example code:

class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) {
        super.init(frame: frame)
    }

    required init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch {
            }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) {}
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
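A hedged mitigation based on the observation above; discarding the first capture is a guess, not an official fix. This would replace handleTap in the example class:

// In CustomARView: discard the first high-resolution frame, whose pose
// appears stale on iOS 26, and project with the second one instead.
@objc func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
    Task {
        do {
            _ = try await session.captureHighResolutionFrame()   // warm-up, discarded
            let frame = try await session.captureHighResolutionFrame()
            print("Frame captured with yaw: \(frame.camera.eulerAngles.y)")
        } catch {
            print("Capture failed: \(error)")
        }
    }
}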
3
1
711
Sep ’25
Are there complete code examples available for “Combine Metal 4 machine learning and graphics”?
Hello, I recently watched the WWDC2025 session titled “Combine Metal 4 machine learning and graphics” (https://aninterestingwebsite.com/videos/play/wwdc2025/262/ ), and I’m very excited about the new Metal 4 features that integrate machine learning with graphics—such as neural ambient occlusion, shader-based ML inference, and the use of MTLTensor and MTL4MachineLearningCommandEncoder. While the session includes helpful code snippets and a compelling debug demo (e.g., the neural ambient occlusion example), the implementation details are not fully shown, and I haven’t been able to find a complete, runnable sample project that demonstrates end-to-end integration of ML and rendering in Metal 4. Would Apple be able to provide a full, working example—such as an Xcode project—that shows how to:

• Export a model to an .mlpackage,
• Convert it to an .mtlpackage,
• Use MTL4MachineLearningCommandEncoder alongside render passes,
• Or embed small neural networks directly in shaders using Shader ML?

Having such a sample would greatly help developers like me adopt these powerful new capabilities correctly and efficiently. Thank you very much for your time and support! Best regards,
4
2
984
Nov ’25
What's wrong with simd_inverse?
When using simd_inverse functions, the same input yields different results on different devices. On the Mac mini with M4 Pro and the M3 Ultra, the result is [screenshot]. But on the Mac mini with M2 Ultra, the result is [screenshot].
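A hedged way to judge this, assuming the per-device differences are in the low-order bits: quantify the quality of the inverse instead of comparing raw components across chips. If the residual below is tiny in all cases, the variation is ordinary floating-point behavior rather than a broken simd_inverse.

import simd

// Sketch: largest absolute entry of (M * M⁻¹ − I); near zero means
// the inverse is good, regardless of small per-device differences.
func inverseResidual(_ m: simd_float4x4) -> Float {
    let inv = m.inverse                     // equivalent to simd_inverse(m)
    let r = m * inv - matrix_identity_float4x4
    return (0..<4)
        .map { simd_abs(r[$0]) }            // per-column absolute values
        .reduce(SIMD4<Float>.zero, simd_max)
        .max()
}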
2
0
1k
Aug ’25