Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

Current Apple Forum about ARKit and visionOS
Recently, questions about ARKit/visionOS in the Apple forum seem to be handled by internal Apple engineers. Immature, untested, makeshift features are being offered, putting average but experienced developers in a difficult position: they are unable to react to the posts or get anything useful from them. Apple needs to review the situation.
1
0
372
Sep ’25
Possible Bug - Hover Effects/Spatial Event Compatibility with PSVR2 Controllers?
Hi, I would like clarification on whether the new hover effects feature introduced in visionOS 26 supports pinch gestures through the PSVR2 controllers. In the sample application, I found that this was not working: pulling the trigger on the controller while looking at the 3D object did not activate the hover effect's spatial event (the object does show the highlight, though); only pinching with my fingers registers the spatial event. I am using visionOS 26.3. This is inconsistent with how the PSVR2 controller behaves on SwiftUI views and UIView elements, where the trigger press does count as a button click. The sample I used was this one: https://aninterestingwebsite.com/documentation/compositorservices/rendering_hover_effects_in_metal_immersive_apps
2
0
820
Mar ’26
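For comparison with the SwiftUI/UIKit behavior mentioned in the post, here is a minimal RealityKit-side sketch of an entity set up to receive both gaze highlighting and spatial events; the Metal/Compositor Services path in the sample is configured differently, so this is only the baseline case the post contrasts against. Entity, view, and gesture wiring are illustrative:

import RealityKit
import SwiftUI

// An entity needs a collision shape plus an input target to receive
// spatial events, and a HoverEffectComponent for the gaze highlight.
func makeInteractive(_ entity: ModelEntity) {
    entity.generateCollisionShapes(recursive: true)
    entity.components.set(InputTargetComponent())
    entity.components.set(HoverEffectComponent())
}

// Spatial events arrive through gestures; a trigger press on a tracked
// controller is expected to map to the same indirect pinch input.
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in /* add entities here */ }
            .gesture(SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in print("tapped \(value.entity.name)") })
    }
}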
Occlusion issues in Immersive Space - Breaking User Input Interaction
I'm developing a custom gesture-based visionOS project that uses hand tracking with collision-detection spheres on the fingers to register user interactions through collision components. I'm experiencing a critical occlusion issue where the collision spheres are intermittently occluded by the background/depth buffer, causing fingers to pass through 3D model entities without registering interactions.
Detailed description: I have added 3D entities in an immersive scene with collision spheres attached to the fingers for detecting user interactions. Each sphere has:
- a CollisionComponent with a sphere shape
- proper collision masks and groups configured
- real-time position updates from hand-joint transforms
Each entity has an InputTargetComponent to register collisions.
The issue: when users move their fingers to an entity to interact, some collision spheres (particularly on the pinkie and ring fingers) become occluded and pass directly through the 3D model without triggering collision events, while other fingers (like the index finger) continue to work correctly. This appears to be a depth-perception/z-buffer issue between the model entity and the hand-tracking collision spheres.
Questions:
- Is there a recommended approach for maintaining consistent depth ordering between hand-tracking entities and 3D models in immersive spaces to prevent occlusion issues?
- Should I be using AnchorEntities to anchor the entity to a plane or world position to establish a more stable depth reference?
- Are there specific RenderingComponent or material settings that could help ensure collision entities maintain their depth priority and don't get occluded?
- Could this be related to z-fighting when collision spheres and entity geometry occupy similar depth ranges? If so, what's the recommended depth-bias approach?
- Is there a better architectural approach to custom hand-gesture interactions that avoids these depth-perception issues?
What would help:
- implementation guidance for reliable collision detection between hand-tracked entities and 3D models via custom gestures
- best practices for depth management in immersive spaces with custom hand-gesture tracking
- sample code demonstrating stable hand-to-object interaction patterns
- information about whether this is a known limitation, or whether there are specific APIs I should be leveraging
This issue is significantly impacting the reliability of our app experience, as users cannot consistently interact with all model components. Any guidance from Apple engineers or developers who have solved similar depth/occlusion challenges would be greatly appreciated. Additional context: this is for a productivity-focused application where accuracy and reliability are critical. Thank you for any assistance!
0
0
441
Nov ’25
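Since collision and hit-testing are resolved by the simulation rather than the renderer, visual occlusion by itself should not swallow events; what usually matters is that every fingertip sphere keeps receiving fresh joint transforms and skips untracked joints. A minimal sketch of that update loop, with illustrative names, assuming the app already has hand-tracking authorization:

import ARKit
import RealityKit

let session = ARKitSession()
let hands = HandTrackingProvider()

func fingertipSphere(radius: Float = 0.008) -> Entity {
    let sphere = Entity()
    sphere.components.set(CollisionComponent(
        shapes: [.generateSphere(radius: radius)],
        mode: .trigger))   // report overlaps without a physics response
    return sphere
}

func trackFingertips(spheres: [HandSkeleton.JointName: Entity]) async throws {
    try await session.run([hands])
    for await update in hands.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }
        for (jointName, sphere) in spheres {
            let joint = skeleton.joint(jointName)
            guard joint.isTracked else { continue }   // e.g. occluded little finger
            // World pose = hand origin * joint pose within the hand.
            sphere.setTransformMatrix(
                anchor.originFromAnchorTransform * joint.anchorFromJointTransform,
                relativeTo: nil)
        }
    }
}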
Blender Geometry Nodes to Reality Composer Pro
Hello! Back from last week's amazing visit to Cupertino for the Game Dev session and diving back into Vision Pro experimentation. I've exported a simple geometry-nodes animation test from Blender for use in Reality Composer Pro (RCP), with intended output to Vision Pro. I've attached a few screenshots showing the node setup and how it animates over time. I select the Cube mesh and export as .usdc with animation; in the Finder via Quick Look, I can actually see it working! If I try exporting as .usdz, however, I'm not seeing any animation in the Finder preview. Next, I import the .usdc file into RCP and add an Animation Library component to the cube mesh, but I'm not seeing any animation selectable, even though I see the animation playing back in the preview. Next, I import the .usdc into Maya (via a proper USD Stage pipeline; I'm learning to be USD-compliant for authoring!) to verify that the animation works, and it does. What step(s) am I missing to get this working in Reality Composer Pro? My goal is to experiment with animating these geometry-node instances (along with color animation, if possible) on Vision Pro for full-scale, immersive presentation. Of particular note, I am not a programmer, so I am trying my best to brute-force this the only way I currently know how: by keyframe animation and importing through Reality Composer Pro. I realize that, ideally, I should be learning how to leverage the code side so I can start programmatically controlling my 3D entities (with animation), but I need more hand-holding and real-world examples to get there. Thx!
1
0
448
Nov ’25
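One quick way to check what RealityKit itself reads out of the export, independent of the RCP Animation Library UI, is to load the file and list its animation resources; if the list comes back empty, the .usdc's animation likely reached USD only as time-sampled mesh data rather than a named clip RealityKit can surface. A small sketch (the loading of the entity is assumed to happen elsewhere):

import RealityKit

func playFirstAnimation(of entity: Entity) {
    // List whatever animation clips RealityKit imported from the USD.
    print("animations:", entity.availableAnimations.map(\.name))
    if let animation = entity.availableAnimations.first {
        entity.playAnimation(animation.repeat())
    } else {
        print("No animation resources found; the export may not contain baked keyframes")
    }
}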
Depth matrix accuracy with the iPhone 14 Pro and LiDAR
Hello Community, I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro. From this depth map, I generate a point cloud using the intrinsic camera parameters. I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud. For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°. My question is: Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code? To obtain the depth map, I’m using: AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) Thanks in advance for your help!
0
0
139
Apr ’25
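For what it's worth, the standard pin-hole unprojection used to build such a point cloud is sketched below. A common source of exactly this kind of angular distortion is applying intrinsics from the full-resolution photo to a lower-resolution depth map without scaling fx, fy, cx, cy to the depth map's size; function and parameter names here are illustrative:

import simd

// Standard pin-hole unprojection: X = (u - cx) * z / fx, Y = (v - cy) * z / fy.
// The intrinsics matrix K must match the resolution of the depth map,
// not the photo, or the reconstructed angles will be skewed.
func unproject(depth: [[Float]], intrinsics K: simd_float3x3) -> [SIMD3<Float>] {
    let fx = K[0][0], fy = K[1][1]
    let cx = K[2][0], cy = K[2][1]   // simd_float3x3 is column-major
    var points: [SIMD3<Float>] = []
    for (v, row) in depth.enumerated() {
        for (u, z) in row.enumerated() where z.isFinite && z > 0 {
            points.append(SIMD3(
                (Float(u) - cx) * z / fx,
                (Float(v) - cy) * z / fy,
                z))
        }
    }
    return points
}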
Can't establish spatial connection after visionOS update
After updating to visionOS 26.2 Beta 2 (and Beta 3), I'm unable to establish a spatial connection to Vision Pro. This was working fine before the update. To test, I've created a fresh spatialApp project from the Xcode template with zero modifications, but I'm hitting the same issue: the Vision Pro is discovered but won't connect. Am I forgetting to update the config somewhere? Any ideas what might be causing this and how to fix it? Thanks!
Warning: -[NSWindow makeKeyWindow] called on <NSWindow: 0xa1f811900> windowNumber=1b9 which returned NO from -[NSWindow canBecomeKeyWindow].
((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - /AppleInternal/Library/BuildRoots/4~CBS0ugAIF7BrQZjLe6r0lhPXO4GJmNDTovxYoV0/Library/Caches/com.apple.xbs/Sources/ExtensionKit/ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil
Unable to obtain a task name port right for pid 415: (os/kern) failure (0x5)
CCContextDeviceGroup.mm(291): +[CCContextDeviceGroup checkBinaryArchivesForDevice:withBundle:]: Failed to find any binary shader archive
0
0
121
Nov ’25
visionOS 3D interactions like the native keyboard when no longer observed in passthrough
While using Apple's Vision Pro, we noticed that we can continue to use the visionOS keyboard even when we no longer actually see it in passthrough. In other words, when we focus on a field to type, visionOS displays the keyboard in a position where we can see it. We then noticed that if we look away a little (up, down, left, or right) so that the keyboard is no longer visible to us in passthrough, it still remains responsive to taps from our fingers at its location. The keyboard stays functional and responsive to taps even though we can no longer observe it. We are trying to figure out how to implement similar functionality in our app, whereby the user can continue to manipulate a 3D entity when they can no longer actually observe it in passthrough (as the visionOS keyboard appears to allow). I assume the visionOS keyboard has this capability thanks to the downward-facing sensors on the hardware, which allow hand tracking even when the hands can no longer be seen by the user; that is likely how we can rest our hands in our lap and still interact with visionOS. How can we implement similar functionality for 3D entities? Is there a way to tap into hand tracking from those downward-facing cameras? Is it possible to manipulate a 3D entity when the user no longer observes it, for example when they shift their attention somewhere else in their field of vision? How does the visionOS keyboard achieve this?
1
0
341
Nov ’25
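A plausible explanation, consistent with the post's own guess: system hand tracking is limited by camera coverage, not by where the user is looking, so a gaze-free interaction can be built on hand pose alone. A minimal distance-based "press" check under that assumption; the fingertip entity would be driven by HandTrackingProvider updates (see the hand-tracking sketch earlier on this page), and all names are illustrative:

import RealityKit
import simd

// Gaze-independent "press": compare a hand-tracked fingertip entity's
// world position against the target. No gaze or tap gesture is involved,
// so it keeps working while the target is outside the user's view.
func isPressing(fingertip: Entity, target: Entity,
                threshold: Float = 0.015) -> Bool {
    distance(fingertip.position(relativeTo: nil),
             target.position(relativeTo: nil)) < threshold
}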
ARKit / visionOS - hand tracking with 3D objects attached to the hand
I use ARKit's hand tracking to attach a 3D model of a remote control to the left hand. The user is supposed to press buttons on the remote control. In the Vision Pro settings, I have removed the left hand from Hands & Eye Tracking; only the right hand is used. The problem is that when the left hand appears, the 3D model of the remote control fades out. I want the remote control to remain completely visible, so the user feels like they really have the remote control in their hand. Can I prevent the fading?
1
0
283
Nov ’25
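One API worth trying here, offered as a sketch rather than a confirmed fix: an immersive space can hide the system's rendering of the user's hands and arms, which prevents the passthrough hands from being composited over hand-attached content. Note that it hides both upper limbs, and the view name below is a placeholder:

import SwiftUI

struct RemoteControlApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "remote") {
            RemoteImmersiveView()   // placeholder for the app's RealityView content
        }
        // Hide the passthrough hands so the attached model stays fully visible.
        .upperLimbVisibility(.hidden)
    }
}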
RealityRenderer's Perspective Camera's FOV
Hi, I have been using RealityRenderer to render scenes on macOS as spatial videos and view them on Vision Pro, and it is awesome. I understand that it uses a PerspectiveCamera to render. I wanted to know: what is the default FOV for this camera, and how far can we push it? Ideally, I want to render a scene with 180 degrees of FOV. Thanks
1
0
127
May ’25
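For context, the field of view on RealityKit's perspective camera is adjustable in degrees, and the default is 60° if memory serves (worth verifying in the current docs). The geometric limit matters more than the property's range: a planar perspective projection degenerates as the FOV approaches 180°, since the image-plane width grows with tan(fov/2). A sketch of the adjustable part:

import RealityKit

let cameraEntity = PerspectiveCamera()
// Widen the frustum; very large values stretch the image severely.
cameraEntity.camera.fieldOfViewInDegrees = 120

// tan(90°) is unbounded, so a single perspective camera cannot cover a
// full 180°. Hemispherical output generally needs several views stitched
// together (e.g. a cubemap) or a non-planar projection applied afterwards.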
Spatial-backdrop standards process
Apple's WWDC video "What’s new for the spatial web" says the spatial-backdrop markup may change as it goes through the standards process (at the 27:26 mark). I have started adding spatial-backdrops to web pages, so I want to keep an eye out for status updates from Apple and follow the standards progress. Is there any place where I can follow this standards process? Has Apple announced any feature updates or news on spatial-backdrops?
0
0
153
Nov ’25
Crash when Displaying RealityView on Multiple Screens, Only When Connected to Xcode
I have an iOS app that uses RealityView to display some models and interact with them; the app uses regular iOS navigation. The challenge I'm facing is how to maintain multiple RealityViews across multiple screens. For example, Screen A has a RealityView, and I then navigate to Screen B (which also has a RealityView) using stack-based navigation. When I do so, I get a crash:
-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5970: failed assertion `Draw Errors Validation
Fragment Function(fsRealityPbr): argument envProbeTable[0] from Buffer(7) with offset(0) and length(16) has space for 16 bytes, but argument has a length(864).
Fragment Function(fsRealityPbr): incorrect type of texture (MTLTextureType2D) bound at Texture binding at index 20 (expect MTLTextureTypeCubeArray) for envProbeDiffuseArray[0].
Interestingly, this crash only happens when debugging with Xcode, not when the app runs on its own. I'm not sure whether what I'm doing is an anti-pattern or this is some Xcode debugging limitation.
0
0
749
Mar ’26
ARView environment.lighting IBL from HDR file
I have an iOS app that can display a USDZ model downloaded from the Internet (and cached locally) via an ARView. I would like to light that model with an image based light (IBL) also downloaded from the Internet. However, as far as I can tell, ARView can only create an IBL from a resource that has been compiled into the Xcode project and loaded with EnvironmentResource(named:in:) or EnvironmentResource.load(named:in:). Is there a way to create an EnvironmentResource from an HDRI via a file URL to use in ARView in iOS?
0
0
758
Nov ’25
version update in Vision Pro
Hi, I'm developing an app for Vision Pro using Xcode. After installing the latest update, things that worked in my app suddenly didn't. In my app flow I tap spheres to get their positions, and for some reason there is now an offset between where I tap and where a marker at that position shows up. Here is the part of the code that does that, and the part responsible for the alignment that happens afterwards:

func loadMainScene(at position: SIMD3<Float>) async {
    guard let content = self.content else { return }
    do {
        let rootEntity = try await Entity(named: "surgery 16.09", in: realityKitContentBundle)
        rootEntity.scale = SIMD3<Float>(repeating: 0.5)
        rootEntity.generateCollisionShapes(recursive: true)
        self.modelRootEntity = rootEntity

        let bounds = rootEntity.visualBounds(relativeTo: nil)
        print("📏 Model bounds: center=\(bounds.center), extents=\(bounds.extents)")

        let pivotEntity = Entity()
        pivotEntity.addChild(rootEntity)
        self.pivotEntity = pivotEntity

        let modelAnchor = AnchorEntity(world: [1, 1.3, -0.8])
        modelAnchor.addChild(pivotEntity)
        content.add(modelAnchor)
        updateModelOpacity(0.5)
        self.modelAnchor = modelAnchor

        rootEntity.visit { entity in
            print("👀 Entity in model: \(entity.name)")
            if entity.name.lowercased().hasPrefix("focus") {
                entity.generateCollisionShapes(recursive: true)
                entity.components.set(InputTargetComponent())
                print("🎯 Made tappable: \(entity.name)")
            }
        }
        print("✅ Model loaded with collisions")

        guard let sphere = placementSphere else { return }
        let sphereWorldXform = sphere.transformMatrix(relativeTo: nil)
        var newXform = sphereWorldXform
        newXform.columns.3.y += 0.1 // move up by 10 cm

        let gridAnchor = AnchorEntity(world: newXform)
        self.gridAnchor = gridAnchor
        content.add(gridAnchor)

        let baseScene = try await Entity(named: "Scene", in: realityKitContentBundle)
        let gridSizeX = 18
        let gridSizeY = 10
        let gridSizeZ = 10
        let spacing: Float = 0.05
        let startX: Float = -Float(gridSizeX - 1) * spacing * 0.5 + 0.3
        let startY: Float = -Float(gridSizeY - 1) * spacing * 0.5 - 0.1
        let startZ: Float = -Float(gridSizeZ - 1) * spacing * 0.5

        for i in 0..<gridSizeX {
            for j in 0..<gridSizeY {
                for k in 0..<gridSizeZ {
                    if j < 2 || j > gridSizeY - 5 { continue } // remove 2 bottom, 4 top
                    let cell = baseScene.clone(recursive: true)
                    cell.name = "Sphere"
                    cell.scale = .one * 0.02
                    cell.position = [
                        startX + Float(i) * spacing,
                        startY + Float(j) * spacing,
                        startZ + Float(k) * spacing
                    ]
                    cell.generateCollisionShapes(recursive: true)
                    gridCells.append(cell)
                    gridAnchor.addChild(cell)
                }
            }
        }
        content.add(gridAnchor)
        print("✅ Grid added")
    } catch {
        print("❌ Failed to load: \(error)")
    }
}

private func handleModelOrGridTap(_ tappedEntity: Entity) {
    guard let modelRootEntity = modelRootEntity else { return }
    let localPosition = tappedEntity.position(relativeTo: modelRootEntity)
    let worldPosition = tappedEntity.position(relativeTo: nil)

    switch tapStep {
    case 0:
        modelPointA = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0, 0]))
        print("📍 Model point A: \(localPosition)")
        tapStep += 1
    case 1:
        modelPointB = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0.5, 0]))
        print("📍 Model point B: \(localPosition)")
        tapStep += 1
    case 2:
        targetPointA = worldPosition
        targetMarkerA = createMarker(at: worldPosition, color: [0, 1, 0])
        modelAnchor?.addChild(targetMarkerA!)
        print("✅ Target point A: \(worldPosition)")
        tapStep += 1
    case 3:
        targetPointB = worldPosition
        targetMarkerB = createMarker(at: worldPosition, color: [0, 0, 1])
        modelAnchor?.addChild(targetMarkerB!)
        print("✅ Target point B: \(worldPosition)")
        alignmentReady = true
        tapStep += 1
    default:
        print("⚠️ Unexpected tap on model helper at step \(tapStep)")
    }
}

func alignModel2Points() {
    guard let modelPointA = modelPointA,
          let modelPointB = modelPointB,
          let targetPointA = targetPointA,
          let targetPointB = targetPointB,
          let modelRootEntity = modelRootEntity,
          let pivotEntity = pivotEntity,
          let modelAnchor = modelAnchor else {
        print("❌ Missing data for alignment")
        return
    }

    let modelVec = modelPointB - modelPointA
    let targetVec = targetPointB - targetPointA
    let modelLength = length(modelVec)
    let targetLength = length(targetVec)
    let scale = targetLength / modelLength

    let modelDir = normalize(modelVec)
    let targetDir = normalize(targetVec)
    var axis = cross(modelDir, targetDir)
    let axisLength = length(axis)
    var rotation = simd_quatf()

    if axisLength < 1e-6 {
        if dot(modelDir, targetDir) > 0 {
            rotation = simd_quatf(angle: 0, axis: [0, 1, 0])
        } else {
            let up: SIMD3<Float> = [0, 1, 0]
            axis = cross(modelDir, up)
            if length(axis) < 1e-6 { axis = cross(modelDir, [1, 0, 0]) }
            rotation = simd_quatf(angle: .pi, axis: normalize(axis))
        }
    } else {
        let dotProduct = dot(modelDir, targetDir)
        let clampedDot = max(-1.0, min(dotProduct, 1.0))
        let angle = acos(clampedDot)
        rotation = simd_quatf(angle: angle, axis: normalize(axis))
    }

    modelRootEntity.scale = .one * scale
    modelRootEntity.orientation = rotation

    let transformedPointA = rotation.act(modelPointA * scale)
    pivotEntity.position = -transformedPointA
    modelAnchor.position = targetPointA
    alignedModelPosition = modelAnchor.position
    print("✅ Aligned with scale \(scale), rotation \(rotation)")
}
2
0
341
Oct ’25
PhotogrammetrySession Polygon Count Limit – How Is It Determined by Hardware?
Hi Apple Team, I’m working on a human portrait scanning application using PhotogrammetrySession, and I’ve been very impressed by the results. Thank you for building such a powerful and accessible photogrammetry solution into macOS! I do, however, have a question regarding mesh detail limitations on different Mac hardware configurations. When using PhotogrammetrySession.Request.Detail.custom and trying to set maximumPolygonCount = 1000000, I see the following log message:
Clamped max poly count: 1000000 to device limit. 250000 is used.
This is on an M1 Max with 32 GB RAM. I’m aware that PhotogrammetrySession.limits can report values like maximumInputImageDimension and maximumNumberOfInputImages, but I haven’t found documentation on how the maximumPolygonCount is determined, and what hardware specs influence it. Is it tied more to:
• GPU performance (e.g. neural/graphics cores)?
• CPU architecture?
• Memory size or bandwidth?
• Or is it fixed per SoC generation?
I’d love to understand what kind of hardware upgrades (e.g. moving to M4 Pro or increasing RAM) could allow me to increase mesh complexity and generate more detailed models. Any insights would be greatly appreciated, and if this is covered in upcoming WWDC sessions or documentation, I’d be happy to tune in. Thanks in advance! KitCheng
0
0
158
May ’25
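A sketch of the custom-detail path the post refers to; the maximumPolygonCount property name follows the post, the customDetailSpecification spelling is an assumption worth verifying against the RealityKit docs, and the clamp happens inside the session, so the applied value surfaces only in the quoted log line. Paths are placeholders:

import RealityKit

var configuration = PhotogrammetrySession.Configuration()
// Request 1M polygons; the session clamps this to the device limit.
configuration.customDetailSpecification.maximumPolygonCount = 1_000_000

let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/images", isDirectory: true),
    configuration: configuration)
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/path/to/portrait.usdz"),
               detail: .custom)
])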
What is the environment in the Vision Pro simulator sidebar?
If I long-press on an element, the sidebar disappears and a Done button appears on the screen, but nothing else changes. So what are the Environments in Vision Pro's Simulator?
2
0
107
Aug ’25
RotationSystem and RotationComponent API Updates for visionOS 26 Beta
Are there any changes to RotationSystem: System or RotationComponent: Component in visionOS 26 that I should be aware of, so I can tell whether I need to update how I use them in my visionOS app?
1
0
63
Jun ’25
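RotationSystem and RotationComponent come from Apple sample code rather than the framework itself, so OS updates do not change them directly; what can drift is the System/Component protocol surface they conform to. A typical implementation of the pair, useful as a baseline to diff against, is sketched below:

import RealityKit

struct RotationComponent: Component {
    var speed: Float = 1.0                 // radians per second
    var axis: SIMD3<Float> = [0, 1, 0]
}

struct RotationSystem: System {
    static let query = EntityQuery(where: .has(RotationComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query,
                                       updatingSystemWhen: .rendering) {
            guard let rotation = entity.components[RotationComponent.self] else { continue }
            // Advance the spin by speed * elapsed frame time.
            entity.orientation *= simd_quatf(
                angle: rotation.speed * Float(context.deltaTime),
                axis: rotation.axis)
        }
    }
}

// Registration, once at app launch:
// RotationComponent.registerComponent()
// RotationSystem.registerSystem()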
Foveated Streaming - Can I stream Immersive Video from Mac
This is a very exciting feature in the 26.4 beta, but from the documentation it seems it can only integrate with the NVIDIA CloudXR™ SDK. I'm wondering whether it's possible to use this tool to stream immersive video from a Mac to Vision Pro?
1
0
738
4w
Sky Dome vs. Sky Sphere
In Reality Composer Pro, why is the Sky Sphere so much larger than the Sky Dome? By my estimate, the Sky Sphere has a radius of 100 m, while the Sky Dome has a radius of only 12 m.
0
0
395
Oct ’25
Rating modal in Apple Vision Pro
If I trigger the Apple rating modal in an immersive space, it appears on the ground at (0, 0, 0). I need it to appear in front of the user, as the push-notification permission prompt and other permission requests do.
1
0
375
Oct ’25
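A hedged workaround sketch: the review prompt is positioned relative to a scene, so requesting it from a regular window (while the user is looking at that window) rather than from inside the immersive space may give the system a sensible placement. View and button names are placeholders:

import SwiftUI
import StoreKit

struct SettingsView: View {
    @Environment(\.requestReview) private var requestReview

    var body: some View {
        Button("Rate this app") {
            // Present from a window scene, not from the immersive space.
            requestReview()
        }
    }
}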
How to make the device cross into a portal world
I want to step into a portal world. I know PortalCrossingComponent can make an entity cross a portal, but how do I make the device itself cross into the portal world?
1
0
141
Apr ’25
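For reference, the entity-level portal setup looks like the sketch below; for the viewpoint itself ("the device") to end up inside the world, the usual route is to switch the scene to a fully immersive style rather than moving the head through a portal plane. Component names are RealityKit's; the rest is illustrative:

import RealityKit

// World that exists "behind" the portal.
let world = Entity()
world.components.set(WorldComponent())
// ... add sky, scenery, etc. as children of `world` ...

// Portal surface that renders a view into that world.
let portal = Entity()
portal.components.set(ModelComponent(
    mesh: .generatePlane(width: 1, height: 2),
    materials: [PortalMaterial()]))
portal.components.set(PortalComponent(target: world))

// Entities allowed to cross the portal boundary need this component.
let crosser = Entity()
crosser.components.set(PortalCrossingComponent())
world.addChild(crosser)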