Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Looking for a way to implement the video display effect in Apple's 'Spatial Gallery'
Hi guys, I noticed that Apple created a really engaging visual effect for browsing spatial videos in the Spatial Gallery app. The video appears embedded in a glass panel with glowing edges, and it even shows a parallax effect as you move around. When I tried to display a stereo video using RealityView, however, the video entity always floats above the panel. How does visionOS implement this effect? Is there any approach or example code I could use to achieve it in my own app? Thanks!
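For the playback part (not the glass panel treatment, which does not appear to be exposed as public API), here is a minimal sketch using RealityKit's VideoPlayerComponent with stereo viewing. It assumes videoURL points at an MV-HEVC spatial video and that this runs inside a RealityView make closure:

    import AVFoundation
    import RealityKit

    // Inside a RealityView make closure, for example.
    let player = AVPlayer(url: videoURL)  // videoURL: assumed spatial (MV-HEVC) video
    var videoComponent = VideoPlayerComponent(avPlayer: player)
    videoComponent.desiredViewingMode = .stereo  // request stereo presentation

    let videoEntity = Entity()
    videoEntity.components.set(videoComponent)
    content.add(videoEntity)
    player.play()

The glow and parallax framing in Spatial Gallery look like custom presentation layered on top of playback; as far as I know, no public API reproduces them directly.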
Replies: 3 · Boosts: 1 · Views: 253 · Activity: Jun ’25
Is ARGeoTrackingConfiguration always more accurate than ARWorldTrackingConfiguration for world scale AR?
We are working on a world-scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations. Since geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking becomes unavailable as the device moves away, while still retaining good AR camera accuracy. We might need to come up with some algorithm that uses the device GPS to line up the ARCamera with our objects. Question: does geo-tracking always provide accuracy greater than or equal to that of world tracking for an outdoor GPS AR experience? If so, we can simply use ARGeoTrackingConfiguration the entire time and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is unavailable and/or its accuracy is low, then roll our own algorithm to keep the camera aligned. Thanks.
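A sketch of one possible fallback strategy (not a confirmed recommendation): check geo-tracking availability up front and monitor its status at runtime, swapping configurations when it degrades. The session wiring is assumed.

    import ARKit

    func updateTrackingConfiguration(for session: ARSession) {
        // Device capability check first; fall back to world tracking if absent.
        guard ARGeoTrackingConfiguration.isSupported else {
            session.run(ARWorldTrackingConfiguration())
            return
        }
        // Availability depends on the current location's geo-tracking coverage.
        ARGeoTrackingConfiguration.checkAvailability { available, _ in
            let configuration: ARConfiguration = available
                ? ARGeoTrackingConfiguration()
                : ARWorldTrackingConfiguration()
            DispatchQueue.main.async {
                session.run(configuration)
            }
        }
    }

    // ARSessionDelegate callback: react to geo-tracking quality changes at runtime.
    func session(_ session: ARSession, didChange geoTrackingStatus: ARGeoTrackingStatus) {
        if geoTrackingStatus.state == .notAvailable {
            // Fall back to world tracking (plus a custom GPS alignment step).
            session.run(ARWorldTrackingConfiguration())
        }
    }

Note that switching configurations mid-session has side effects on existing anchors, so any fallback needs its own re-alignment step.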
Replies: 3 · Boosts: 0 · Views: 1.1k · Activity: Oct ’25
VisionOS Enterprise API Not Working
My development team admin requested the Enterprise API for camera access on the Vision Pro. We got it granted, received a license for usage, and got instructions with next steps for integrating it. We did the following: I added the capabilities, created a new provisioning profile with this access, added the entitlement to the entitlements file, replaced the dummy license file with the one we were sent, and also have a matching bundle identifier and development certificate. "Main Camera Access" shows up in our Signing & Capabilities tab, and we also added NSMainCameraDescription to the Info.plist and allow access when opening the app. But it is still not showing camera access for some reason. Even when I download and run the sample project for "Accessing the Main Camera" and follow the exact instructions mentioned here: https://aninterestingwebsite.com/documentation/visionos/accessing-the-main-camera I am unable to receive camera frames. None of this works. Not in my app, and not in the sample app that I downloaded and ran on the Vision Pro after replacing the dummy license file.
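A small diagnostic sketch (assuming the visionOS ARKit Enterprise camera APIs) that can at least confirm what the system reports about support and authorization before the provider is started:

    import ARKit

    func checkMainCameraAccess() async {
        // Fails if the device, OS, or enterprise license doesn't support it.
        guard CameraFrameProvider.isSupported else {
            print("CameraFrameProvider is not supported in this configuration.")
            return
        }
        let session = ARKitSession()
        // .cameraAccess is the Enterprise API authorization type for main camera access.
        let result = await session.requestAuthorization(for: [.cameraAccess])
        print("Camera access authorization: \(String(describing: result[.cameraAccess]))")
    }

If this prints a denied or not-determined status despite the entitlement being present, the mismatch is likely in the provisioning profile or license file rather than the code.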
Replies: 3 · Boosts: 0 · Views: 887 · Activity: Dec ’25
ARKit Planes do not appear where expected on visionOS
I'm using ARKitSession and PlaneDetectionProvider to detect planes. I have a basic process to create an entity for each detected plane. Each one gets a random color for its material, and each plane is sized based on the bounds of the anchor provided by ARKit.

    let mesh = MeshResource.generatePlane(
        width: anchor.geometry.extent.width,
        depth: anchor.geometry.extent.height
    )

Then I'm using this to position each entity:

    entity.transform = Transform(matrix: anchor.originFromAnchorTransform)

This seems to be the right method, but many (not all) planes are not where they should be. The sizes look OK, but the X and Y positions are off. Take this large green plane on the wall: it should span the entire wall, but it is offset along the X axis so that it is pushed to the left of where the center of the anchor is. When I visualize surfaces using the Xcode debugging tools, that tool reports the planes where I'd expect them to be. Can you see what I'm getting wrong here? Full code below.

    struct Example068: View {
        @State var session = ARKitSession()
        @State private var planeAnchors: [UUID: Entity] = [:]
        @State private var planeColors: [UUID: Color] = [:]

        var body: some View {
            RealityView { content in
            } update: { content in
                // Add any plane entities that aren't in the scene yet.
                for (_, entity) in planeAnchors {
                    if !content.entities.contains(entity) {
                        content.add(entity)
                    }
                }
            }
            .task {
                try! await setupAndRunPlaneDetection()
            }
        }

        func setupAndRunPlaneDetection() async throws {
            let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
            if PlaneDetectionProvider.isSupported {
                do {
                    try await session.run([planeData])
                    for await update in planeData.anchorUpdates {
                        switch update.event {
                        case .added, .updated:
                            let anchor = update.anchor
                            if planeColors[anchor.id] == nil {
                                planeColors[anchor.id] = generatePastelColor()
                            }
                            let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                            planeAnchors[anchor.id] = planeEntity
                        case .removed:
                            let anchor = update.anchor
                            planeAnchors.removeValue(forKey: anchor.id)
                            planeColors.removeValue(forKey: anchor.id)
                        }
                    }
                } catch {
                    print("ARKit session error \(error)")
                }
            }
        }

        private func generatePastelColor() -> Color {
            let hue = Double.random(in: 0...1)
            let saturation = Double.random(in: 0.2...0.4)
            let brightness = Double.random(in: 0.8...1.0)
            return Color(hue: hue, saturation: saturation, brightness: brightness)
        }

        private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
            let mesh = MeshResource.generatePlane(
                width: anchor.geometry.extent.width,
                depth: anchor.geometry.extent.height
            )
            var material = PhysicallyBasedMaterial()
            material.baseColor.tint = UIColor(color)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            return entity
        }
    }
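One thing worth checking here (a hypothesis based on the documented PlaneAnchor geometry, not a confirmed diagnosis): the extent rectangle is not guaranteed to be centered on the anchor's origin. PlaneAnchor.Geometry.Extent carries its own anchorFromExtentTransform, so positioning the sized plane may need to compose both transforms:

    // Sketch: place the plane entity at the center of the extent rather than
    // at the anchor origin, by chaining the two transforms ARKit provides.
    let matrix = anchor.originFromAnchorTransform * anchor.geometry.extent.anchorFromExtentTransform
    entity.transform = Transform(matrix: matrix)

That would explain why only some planes look wrong: for planes whose extent happens to be centered on the origin, the two placements coincide.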
Replies: 3 · Boosts: 0 · Views: 217 · Activity: Apr ’25
Ornaments in Presentations
We can add ornaments to popovers shown by PresentationComponent, but I'm not sure if we should. While working on the editor for entities in a Volume-based app, I had the idea to add ornaments to the presented views. The entire app exists inside a volume. A user can tap an item to present a popover UI to edit it. This is displayed using the new PresentationComponent in visionOS 26. Ornaments have a new attachment anchor option this year: .parent().

    .ornament(attachmentAnchor: .parent(.top), ornament: {...})

This works well in the Simulator. We can add ornaments around this popover view just like we would with a window. Unfortunately, when I run this on device I get a different experience. Any part of the ornament that overlaps with the popover content isn't rendered correctly. Sometimes it entirely disappears; other times it becomes partially transparent. We could use content alignment to try to make sure the ornament doesn't overlap the popover content.

    .ornament(attachmentAnchor: .parent(.top), contentAlignment: .bottom, ornament: {...})

This works sometimes, but not all the time. It's not clear if this is a bug, because I'm not sure we are even supposed to be able to use ornaments this way. Here is my hierarchy:

- An app opens as a Volume
- The Volume presents a RealityView, with its own ornament using the .scene() anchor
- Multiple entities with a PresentationComponent show an edit view
- The edit view uses the .parent() anchor to add ornaments

What makes me unsure is that other methods for drawing UI in a RealityView don't seem to work with ornaments. For example, if I add an attachment to show a view with an ornament, the ornament is anchored to the volume, not the attachment view, even when I use the .parent() anchor. So what do we think? Is this a rendering bug? Are ornaments intended to work with attachments and presentations?
Replies: 2 · Boosts: 0 · Views: 370 · Activity: Aug ’25
Possible Bug - Hover Effects/Spatial Event Compatibilty with PSVR2 Controllers?
Hi, I would like clarification on whether the new hover effects feature introduced in visionOS 26 supports pinch gestures through the PSVR2 controllers. In your sample application, I found that this was not working: pulling the trigger on the controller while looking at the 3D object did not activate the hover effect spatial event (the object does show the highlight, though); only pinching with my fingers seems to register/trigger the spatial event. I am using visionOS 26.3. This is inconsistent with how the PSVR2 controller behaves on SwiftUI views and UIView elements, where the trigger press does count as a button click. The sample I used was this one: https://aninterestingwebsite.com/documentation/compositorservices/rendering_hover_effects_in_metal_immersive_apps
Replies: 2 · Boosts: 0 · Views: 819 · Activity: Mar ’26
Misaligned visionOS Simulator Home Position
Using Xcode v26 Beta 6 on macOS v26 Beta (25a5349a). When pressing the home button of the visionOS simulator, I am not positioned in the middle of the room like I normally would be. This occurred after moving around the space a lot to find an element added to an ImmersiveSpace. How to resolve: restart the simulator device. See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
Replies: 2 · Boosts: 0 · Views: 894 · Activity: Sep ’25
How to update TextureResource with MTLTexture?
Hi, I have a monitoring app that takes input video from UVC, processes it using Metal, and eventually produces an MTLTexture. The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but MTLTexture -> CGImage -> TextureResource only gets 7 fps... Is there any way I can make it faster?
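A minimal sketch of one faster path: use TextureResource.DrawableQueue to hand Metal textures to RealityKit directly and skip the CGImage hop entirely. The dimensions and pixel format below are placeholder assumptions; they must match the real stream.

    import Metal
    import RealityKit

    // One-time setup: back an existing TextureResource with a drawable queue.
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 1920,
        height: 1080,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none
    )
    let drawableQueue = try TextureResource.DrawableQueue(descriptor)
    textureResource.replace(withDrawables: drawableQueue)

    // Per frame: copy the processed MTLTexture into the next drawable on the GPU.
    func publish(_ processed: MTLTexture, commandQueue: MTLCommandQueue) throws {
        let drawable = try drawableQueue.nextDrawable()
        guard let commandBuffer = commandQueue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: processed, to: drawable.texture)
        blit.endEncoding()
        commandBuffer.commit()
        drawable.present()
    }

Since everything stays on the GPU, this approach should sustain the 50 fps input rate rather than being throttled by CPU-side image conversion.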
Replies: 2 · Boosts: 0 · Views: 523 · Activity: Oct ’25
How do we use the new Unified Coordinate Conversion features in visionOS 26?
The landing page for visionOS 26 mentions: "The Unified Coordinate Conversion API makes moving views and entities between scenes straightforward — even between views and ARKit accessory anchors." This WWDC session very briefly shows a single example of using it, but with no context. For example, they discuss a way to tell the distance between a Model3D and an entity in a RealityView, but they don't provide any details for how they are referencing the entity (bolts in the slide). The session used the BOT-anist example project that we saw in visionOS 2, but the version in the Sample Code library has not been updated with these examples. I was able to put together a simple example where we can get the position of a window relative to the world origin. It even updates when the user recenters.

    struct Lab080: View {
        @State private var posX: Float = 0
        @State private var posY: Float = 0
        @State private var posZ: Float = 0

        var body: some View {
            GeometryReader3D { geometry in
                VStack {
                    Text("Unified Coordinate Conversion")
                        .font(.largeTitle)
                        .padding(24)
                    VStack {
                        Text("X: \(posX)")
                        Text("Y: \(posY)")
                        Text("Z: \(posZ)")
                    }
                    .font(.title)
                    .padding(24)
                }
                .onGeometryChange3D(for: Point3D.self) { proxy in
                    try! proxy
                        .coordinateSpace3D()
                        .convert(value: Point3D.zero, to: .worldReference)
                } action: { old, new in
                    posX = Float(new.x)
                    posY = Float(new.y)
                    posZ = Float(new.z)
                }
            }
        }
    }

This is all that I've been able to figure out so far. What other features are included in this new Unified Coordinate Conversion? Can we use it to get the position of one window relative to another? Can we use it to get the position of a view in a window relative to an entity in a RealityView, for example in a Volume or an Immersive Space? What else can Unified Coordinate Conversion do? Are there documentation pages that I'm missing? I'm not sure what to search for. Are there any sample projects that use these features? Any additional information would be very helpful.
Replies: 2 · Boosts: 5 · Views: 1.5k · Activity: Sep ’25
Reality Composer to Reality Composer Pro?
I sketched an idea for a project in Reality Composer on my iPad, thinking that when I had a chance to sit down I would work it up in Xcode. However, when I got back to my computer, I discovered I cannot open a file created in Reality Composer (or the exported Reality file) in Reality Composer Pro. Am I missing something obvious here? Because this seems like a huge oversight. If anyone can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it, partly because there seem to be objects available in Reality Composer that are not in Reality Composer Pro. Thanks Stan
Replies: 2 · Boosts: 1 · Views: 408 · Activity: Jun ’25
Help with visionOS pushWindow issues requested
I first started using the SwiftUI pushWindow API in visionOS 26.2, and I've reported several bugs I discovered, listed below. Under certain circumstances, pushed window relationships may break, and this behavior affects all other apps, not just the app that caused the problem, until the next device reboot. In other cases, the system may crash and restart.

- (FB21287011) When a window presented with pushWindow is dismissed, its parent window reappears in the wrong location
- (FB21294645) Pinning a pushed window to a wall breaks pushWindow for all other apps on the system
- (FB21594646) pushWindow interacts poorly with the window bar close app option
- (FB21652261) If a window locked to a wall calls pushWindow, the original window becomes unlocked
- (FB21652271) If a window locked in place calls pushWindow and the pushed window is closed, the system freezes
- (FB21828413) pushWindow, UIApplication.open, and a dismissed immersive space result in multiple failures that require a device reboot
- (FB21840747) visionOS randomly foregrounds a backgrounded immersive space app with a pushed window's parent window visible instead of the pushed window
- (FB21864652) When a running app is selected in the visionOS home view, windows presented with pushWindow spontaneously close
- (FB21873482) Pushed windows use the fixed scaling behavior instead of the dynamic scaling behavior

I'm posting the issues here in case this information is helpful to other developers. I'd also like to hear about other pushWindow issues developers have encountered, so I can watch out for them.

Questions:
- I've discovered that some of the issues above can be partially worked around by applying the defaultLaunchBehavior and restorationBehavior scene modifiers to suppress window restoration and locking, which pushWindow appears to interact poorly with (see the sketch below). Are there other recommended workarounds?
- I've observed that the Photos and Settings apps, which predate the pushWindow API, are not affected by the issues I reported. Are there other, more reliable ways I could achieve the same behavior as pushWindow without relying on that API?

I'd appreciate any guidance Apple engineers could provide. Thank you.
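For reference, a sketch of the partial workaround described above. The scene modifiers are real SwiftUI APIs; the window group id and view are hypothetical placeholders.

    // Suppress restoration and default launch on the window group that
    // presents pushed windows, which pushWindow reportedly interacts
    // poorly with.
    WindowGroup(id: "editor") {
        EditorView()  // hypothetical view
    }
    .restorationBehavior(.disabled)      // don't restore this window across launches
    .defaultLaunchBehavior(.suppressed)  // don't open this scene by default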
Replies: 2 · Boosts: 3 · Views: 853 · Activity: 1w
SharePlay on visionOS with remote participants
Hi everyone, I'm building a visualization app for the Vision Pro that uses SharePlay and GroupActivities to explore datasets collaboratively. I've successfully implemented the new SharedWorldAnchor feature, and everything works well with nearby, local participants. However, I'm stuck on one point: how can I share a world anchor with remote participants who join via FaceTime as spatial personas? Apple's demo app (where multiple users move a plane model around) seems to suggest that this is possible. For context, I'm building an immersive app with Metal rendering. Any guidance or examples would be greatly appreciated! Thanks, Jens
Replies: 2 · Boosts: 1 · Views: 470 · Activity: Sep ’25
Shared/GroupImmersive Space – Query Local Device Transform
Hi, I am in the process of implementing SharePlay in our app. The shared experience opens an Immersive Space, and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true. visionOS then establishes a shared coordinate space for the immersive space. From the docs: "To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session". There are cases where we want to position content in front of the user (independent of the shared session, and for each user individually). Normally we use the transform retrieved via worldTrackingProvider.queryDeviceAnchor.originFromAnchorTransform to position content in front of the user (plus some Z offset and smooth interpolation). This works fine in non-SharePlay instances, and the device transform is where I would expect it to be, but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, and I end up with a transform that might be offset. Here is a video of the issue in action: https://streamable.com/205r2p The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call, and the entity is always placed based on the head position. The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see, it's offset. Is there any way I can query this transform locally for the user during a FaceTime call? I would also like to know whether it's possible to disable this automatic entity transform syncing behavior. Setting entity.synchronization = nil results in the entity not showing up at all. https://aninterestingwebsite.com/documentation/realitykit/synchronizationcomponent Is SynchronizationComponent only relevant for the legacy MultipeerConnectivity approach? Thank you!
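For comparison, a sketch of the head-anchor approach that reportedly stays correct during the call: parent per-user UI to a continuously tracked head anchor instead of sampling queryDeviceAnchor each frame. The mesh size and offset are arbitrary placeholder values.

    // Inside a RealityView make closure.
    let headAnchor = AnchorEntity(.head, trackingMode: .continuous)
    content.add(headAnchor)

    let panel = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.25))
    panel.position = [0, 0, -1.0]  // about 1 m in front of the user
    headAnchor.addChild(panel)

The trade-off is that a head anchor follows the user rigidly, so the smooth lazy-follow interpolation described in the post would have to be implemented differently.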
Replies: 2 · Boosts: 0 · Views: 385 · Activity: Oct ’25
Cannot find devices in RemoteImmersiveSpace
Hi, I'm running the Spatial Rendering App sample on a MacBook Pro running macOS 26.4 Beta and a Vision Pro running visionOS 26.3.1. Handoff and SharePlay are on, both devices are on the same Apple ID and network, and SharePlay screen sharing works fine between the two devices. However, when calling openImmersiveSpace, the device picker fails to present and no devices are found. Errors from the console:

    ((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - .../ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil
    Unable to obtain a task name port right for pid 638: (os/kern) failure (0x5)
    Unable to present an ImmersiveSpace for Scene id 'Compositor Services'

Is this a known bug, or am I missing something? Thanks!
Replies: 2 · Boosts: 1 · Views: 1.2k · Activity: 2w
How to open control center in Vision Pro’s Xcode simulator
I want to open Control Center in Vision Pro's Xcode simulator. Can I open it? If so, please tell me how to do it. Thank you.
Replies: 2 · Boosts: 0 · Views: 436 · Activity: Aug ’25
What is the environment in the Vision Pro simulator sidebar?
If I long-press on an element, the sidebar disappears and a Done button appears on the screen, but nothing else changes. So what are the Environments in Vision Pro's simulator actually for?
Replies: 2 · Boosts: 0 · Views: 107 · Activity: Aug ’25
The skeletal model moves
I want to make a model with added bones (exported from Blender) move by dragging it with gestures. My understanding is that this uses IKComponent, but I don't know how to use it specifically.
Replies: 2 · Boosts: 0 · Views: 161 · Activity: Apr ’25
Show text overlay for ImagePresentationComponent
With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?
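A generic sketch of one approach (an assumption, not the documented Spatial Gallery technique): overlay a SwiftUI attachment slightly in front of the entity that carries the component. Here imageEntity is assumed to be created elsewhere with an ImagePresentationComponent already set.

    RealityView { content, attachments in
        content.add(imageEntity)
        if let caption = attachments.entity(for: "caption") {
            caption.position = [0, -0.2, 0.05]  // a bit below and in front of the image
            imageEntity.addChild(caption)
        }
    } attachments: {
        Attachment(id: "caption") {
            Text("Caption text")
                .padding()
                .glassBackgroundEffect()
        }
    }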
Replies: 2 · Boosts: 0 · Views: 193 · Activity: Jun ’25
The folding and unfolding effect of the NBA sand table
I saw this magical sand table; its unfolding and folding effects are similar to spreading out cards, which is very interesting, but I don't know how to achieve it. I'd like to hear any approaches or ideas for this effect. Can it be achieved with the existing APIs?
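A loose sketch of one idea (purely an assumption about how the effect might be built, using standard RealityKit transform animation): fan a stack of card-like entities out from a pile, then reverse the animation to fold them back. Angles and offsets are arbitrary placeholder values.

    // "cards" is assumed to be an array of entities stacked at the same spot.
    for (index, card) in cards.enumerated() {
        var target = card.transform
        target.translation = [Float(index) * 0.12, 0, Float(index) * 0.01]
        target.rotation = simd_quatf(angle: Float(index) * 0.1, axis: [0, 0, 1])
        // Built-in eased transform animation; run the reverse to fold up again.
        card.move(to: target, relativeTo: card.parent, duration: 0.6, timingFunction: .easeInOut)
    }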
Replies: 2 · Boosts: 0 · Views: 133 · Activity: May ’25
GestureComponent bug on visionOS
    let component = GestureComponent(DragGesture())

iOS: ☑️ visionOS: ❌

This bug has persisted from beta to the public release; please fix it.
Replies: 2 · Boosts: 0 · Views: 444 · Activity: Sep ’25