Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

108 Posts


Real world anchors
I’m trying to build a persistent world map of my college campus using ARKit, but it’s not very reliable. Anchors don’t consistently appear in the same place across sessions. I’ve tried using image anchors, but they didn’t improve accuracy much. How can I create a stable world map for a larger area and reliably relocalize anchors? Are there better approaches or recommended resources for this?
Replies: 1 · Boosts: 0 · Views: 291 · Activity: 1d
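The usual starting point for persistence is saving and restoring an ARWorldMap. A minimal sketch using the standard ARKit API (the function names and storage URL are illustrative; this alone does not solve the campus-scale reliability question in the post):

import ARKit

// Save the current world map to disk (call when tracking quality is good).
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("World map unavailable: \(String(describing: error))")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
            try data.write(to: url, options: .atomic)
        } catch {
            print("Failed to save world map: \(error)")
        }
    }
}

// Relocalize against the saved map in a later session; anchors stored in the map
// reappear once relocalization succeeds.
func restoreSession(on session: ARSession, from url: URL) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}

For areas much larger than a single room, relocalization against one big map tends to degrade, so splitting the space into several smaller maps, each tied to a recognizable sub-area, is one commonly suggested approach.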
ParticleEmitterComponent Position Offset Issue After iOS 26.1 Update – Seeking Solutions & Workarounds
Problem Summary
After upgrading to iOS 26.1 and 26.2, I'm experiencing a particle positioning bug in RealityKit where ParticleEmitterComponent particles render at an incorrect offset relative to their parent entity. This behavior does not occur on iOS 18.6.2 or earlier versions, suggesting a regression introduced in the newer OS builds.

Environment Details
Operating System: iOS 26.1 & iOS 26.2
Framework: RealityKit
Xcode Version: 16.2 (16C5032a)

Expected vs. Actual Behavior
Expected: Particles should render at the position of the entity to which the ParticleEmitterComponent is attached, matching the behavior on iOS 18.6.2 and earlier.
Actual: Particles appear away from their parent entity, creating a visual misalignment that breaks the intended AR experience.

Steps to Reproduce
1. Create or open an AR application with RealityKit that uses particle components
2. Attach a ParticleEmitterComponent to an entity via a custom system
3. Run the application on iOS 26.1 or iOS 26.2
4. Observe that particles render at an offset position away from the entity

Minimal Code Example
Here's the setup from my test case.

Custom Component & System:

struct SparkleComponent4: Component {}

class SparkleSystem4: System {
    static let query = EntityQuery(where: .has(SparkleComponent4.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            // Only add once
            if entity.components.has(ParticleEmitterComponent.self) { continue }
            var newEmitter = ParticleEmitterComponent()
            newEmitter.mainEmitter.color = .constant(.single(.red))
            entity.components.set(newEmitter)
        }
    }
}

AR Setup:

let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
let model = Entity()
model.components.set(ModelComponent(mesh: boxMesh, materials: [material]))
model.components.set(SparkleComponent4())
model.position = [0, 0.05, 0]
model.name = "MyCube"

let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
anchor.addChild(model)
arView.scene.addAnchor(anchor)

Questions for the Community
Has anyone else encountered this particle positioning issue after updating to iOS 26.1/26.2?
Are there known workarounds or configuration changes to ParticleEmitterComponent that restore correct positioning?
Is this a confirmed bug, or could there be a change in coordinate system handling or transform inheritance that I'm missing?

Additional Information
I've already submitted this issue via Feedback Assistant (FB21346746).
Replies: 3 · Boosts: 1 · Views: 893 · Activity: 5d
RealityView camera feed not shown
I have two RealityViews: ParentView and ChildView. When I tap the button in ParentView, ChildView is presented as a full screen cover, but the camera feed in ChildView is not shown, only a black screen. If I show ChildView directly, it works with the camera feed. Please help me with this issue. Thanks.

import RealityKit
import SwiftUI

struct ParentView: View {
    @State private var showIt = false

    var body: some View {
        ZStack {
            RealityView { content in
                content.camera = .virtual
                let box = ModelEntity(mesh: MeshResource.generateSphere(radius: 0.2),
                                      materials: [createSimpleMaterial(color: .red)])
                content.add(box)
            }
            Button("Click here") {
                showIt = true
            }
        }
        .fullScreenCover(isPresented: $showIt) {
            ChildView()
                .overlay(
                    Button("Close") {
                        showIt = false
                    }.padding(20),
                    alignment: .bottomLeading
                )
        }
        .ignoresSafeArea(.all)
    }
}

import ARKit
import RealityKit
import SwiftUI

struct ChildView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
        }
    }
}
Replies: 5 · Boosts: 1 · Views: 2.0k · Activity: 1w
Compensating for IMU (accelerometer) thermal drift - getting device temperature?
I’m running into a hardware reality: MEMS sensor thermal drift. If a user zeroes out the tilt indoors at 20°C and then takes the phone outside in the cold, the accelerometer baseline shifts just enough as the device cools to throw off the readings. I want to apply a simple thermal compensation curve to the CoreMotion data to keep the "zero" perfectly level regardless of the weather. However, ProcessInfo.thermalState only gives broad buckets (nominal, fair, etc.), which doesn't help me calculate a continuous offset for a phone cooling down degree by degree. Is there any public API, or even a proxy metric, that can give me a rough battery or internal temperature integer? I don’t need high-resolution decimals, just a general device temperature to offset the hardware drift. Any undocumented tricks or proxy metrics anyone has used to handle this?
Replies: 1 · Boosts: 0 · Views: 141 · Activity: 4w
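As far as I know there is no public API for a continuous device temperature, so any compensation curve has to hang off a proxy. A hedged sketch of the overall structure, keyed off ProcessInfo.thermalState buckets (the offset values are hypothetical placeholders you would calibrate yourself, and thermalState only reports heating, so it does not cover the cooling case described above):

import CoreMotion
import Foundation

let motionManager = CMMotionManager()

// Hypothetical, per-device-calibrated pitch offsets (radians) per thermal bucket.
// Real values would come from your own calibration runs, not from Apple.
func pitchOffset(for state: ProcessInfo.ThermalState) -> Double {
    switch state {
    case .nominal:  return 0.0
    case .fair:     return 0.0005
    case .serious:  return 0.0012
    case .critical: return 0.0020
    @unknown default: return 0.0
    }
}

func startTiltUpdates(zeroPitch: Double) {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let motion = motion else { return }
        // Subtract the user's zero reference plus the coarse thermal correction.
        let correction = pitchOffset(for: ProcessInfo.processInfo.thermalState)
        let compensatedPitch = motion.attitude.pitch - zeroPitch - correction
        print("Compensated pitch: \(compensatedPitch)")
    }
}

Another pragmatic option is simply prompting the user to re-zero after a large ambient change, since the drift itself is slow.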
How does ARKit achieve low-latency and stable head tracking using only RGB camera ?
Hi, I’m working on a real-time head/face tracking pipeline using a standard 2D RGB camera, and I’m trying to better understand how ARKit achieves such stable and responsive results in comparable conditions.

To clarify upfront: I’m specifically interested in RGB-only tracking and the underlying vision/ML pipeline. I’m not using TrueDepth or any depth/IR-based sensors, and I’d like to understand how similar stability and responsiveness can be achieved under those constraints.

In my current setup, I estimate head pose from RGB frames (facial landmarks + PnP) and apply temporal filtering (e.g., exponential smoothing and Kalman filtering). This significantly reduces jitter, but introduces noticeable latency, especially during faster head movements.

What stands out in ARKit is that it appears to maintain both very low jitter and very low perceived latency, even when operating with camera input alone. I’m trying to understand what techniques might contribute to this behavior. In particular:

Does ARKit use predictive tracking (e.g., velocity- or acceleration-based pose extrapolation) to compensate for camera and processing delays in RGB-only scenarios?
Are there recommended strategies for balancing temporal smoothing and responsiveness without introducing visible lag in camera-based pose estimation pipelines?
Is the tracking pipeline internally decoupled from rendering (e.g., asynchronous processing with prediction applied at render time)?
Are there general best practices for minimizing end-to-end latency in vision-based head tracking systems beyond standard filtering approaches?

I understand that implementation details may not be public, but any high-level insights or pointers would be greatly appreciated. Thanks!
Replies: 0 · Boosts: 0 · Views: 216 · Activity: Mar ’26
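Apple does not document ARKit's internals, but the predictive-tracking idea in the first question can be sketched independently: estimate velocity from the last two filtered poses and extrapolate forward by the measured pipeline latency, applying the prediction at render time. A minimal, self-contained sketch (the type and function names are illustrative):

import Foundation
import simd

struct TimedPose {
    var time: TimeInterval
    var position: SIMD3<Float>
    var orientation: simd_quatf
}

// Extrapolate the latest filtered pose forward by `latency` seconds
// using a constant-velocity model over the last two samples.
func predictPose(previous: TimedPose, latest: TimedPose, latency: TimeInterval) -> TimedPose {
    let dt = Float(latest.time - previous.time)
    guard dt > 0 else { return latest }

    let lookAhead = Float(latency)

    // Linear part: constant translational velocity.
    let velocity = (latest.position - previous.position) / dt
    let predictedPosition = latest.position + velocity * lookAhead

    // Angular part: scale the last relative rotation to the look-ahead interval.
    let delta = latest.orientation * previous.orientation.inverse
    var predictedOrientation = latest.orientation
    if delta.angle > 1e-4 {
        let scaledDelta = simd_quatf(angle: delta.angle * (lookAhead / dt), axis: delta.axis)
        predictedOrientation = simd_normalize(scaledDelta * latest.orientation)
    }

    return TimedPose(time: latest.time + latency,
                     position: predictedPosition,
                     orientation: predictedOrientation)
}

Because the extrapolation amplifies noise, it is usually combined with fairly light smoothing on the measurement side rather than heavy filtering followed by prediction.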
ARSession Error: Required sensor failed
Hi everyone, I’m currently using the RoomPlan API, which has been working reliably until recently. However, I’ve started encountering an intermittent error and I’m trying to understand what might be causing it.

The error is triggered in the ARSession observer method:

session(_ session: ARSession, didFailWithError error: Error)

It has occurred on at least two devices: iPhone 14 Pro and iPhone 17 Pro.

Here’s the full error message:

ARSession failed domain=com.apple.arkit.error code=102 desc=Required sensor failed. userInfo=["NSLocalizedFailureReason": A sensor failed to deliver the required input., "NSUnderlyingError": Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.}, "NSLocalizedDescription": Required sensor failed.]

This seems to indicate that a required sensor (likely LiDAR or the camera) failed to provide input, but I’m not sure what’s causing it or why it happens only occasionally. Has anyone experienced something similar or has insight into possible causes or fixes? Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 241 · Activity: Mar ’26
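This does not address the root cause (the underlying AVFoundation -11819 error suggests the camera briefly could not deliver frames), but a hedged sketch of detecting the error code and rerunning a plain ARKit session is below. RoomPlan manages its own ARSession, so recovery there would normally go through RoomCaptureSession instead; the sketch only shows the generic ARKit side:

import ARKit
import Foundation

final class SessionErrorHandler: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didFailWithError error: Error) {
        guard let arError = error as? ARError else { return }

        // Code 102 corresponds to ARError.Code.sensorFailed ("Required sensor failed").
        if arError.code == .sensorFailed || arError.code == .sensorUnavailable {
            // One common mitigation: wait briefly, then rerun the session with a fresh configuration.
            DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
                let configuration = ARWorldTrackingConfiguration()
                session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
            }
        }
    }
}

It is also worth checking whether anything else in the app (or another app via a picture-in-picture/camera extension) is holding the camera when the failure occurs, since that is a common trigger for the -11819 underlying error.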
RealityKit equivalent of ARGeoAnchor?
In ARKit there is ARGeoAnchor, which lets you anchor content using latitude and longitude so objects stay fixed to a real-world location. Is there an equivalent feature in RealityKit? I want to place points in the world and make sure they don't move or drift after placement. If RealityKit doesn't support this directly, what is the recommended approach?
Replies: 0 · Boosts: 1 · Views: 516 · Activity: Mar ’26
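RealityKit has no geo-anchor target of its own on iOS, but ARView runs an ARSession underneath, so one workable pattern is to add an ARGeoAnchor to that session and bind an AnchorEntity to it. A minimal sketch (the function and parameter names are illustrative; geo tracking only works on supported devices and in supported regions):

import ARKit
import RealityKit
import CoreLocation

func placeGeoAnchoredSphere(in arView: ARView, at coordinate: CLLocationCoordinate2D) {
    // Geo tracking is only available on supported devices and in supported regions.
    guard ARGeoTrackingConfiguration.isSupported else { return }
    arView.session.run(ARGeoTrackingConfiguration())

    // Create an ARKit geo anchor, then bind a RealityKit entity to it.
    let geoAnchor = ARGeoAnchor(coordinate: coordinate)
    arView.session.add(anchor: geoAnchor)

    let anchorEntity = AnchorEntity(anchor: geoAnchor)
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.2),
                             materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    anchorEntity.addChild(sphere)
    arView.scene.addAnchor(anchorEntity)
}

Outside supported regions, the usual fallback is a plain world anchor plus your own GPS/heading alignment, which will drift more than a true geo anchor.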
RealityKit - Full 3D experience
I have a question I guess more for the Apple team. Why are there no totally 3D experiences for the Vision Pro lineup? I know they have given us tools to bring Unity 3D games to iPhone, and I guess you can also build them in RealityKit. But why, at this moment, are 3D games limited to just iPad and iPhone, and why can't you bring that to Vision Pro? Just to explain: when I say a totally 3D game, I mean games like Gorn. The Vision Pro is definitely powerful enough, but it just feels limited to tabletop games and AR games. Is this something Apple is thinking about implementing?
Replies: 1 · Boosts: 0 · Views: 1.4k · Activity: Mar ’26
Every Unity project with ARKit doesn't open in TestFlight
Even when making a completely fresh Unity project, if I include ARKit and upload it to TestFlight, when I open it on my iPad it says 'Couldn't Load App'. If I don't use TestFlight, the app opens fine. If I build the app directly to my iPad, the app works fine. It opens if I set ARKit to 'optional', but then the AR doesn't work. I have included a camera and location usage description, so it's not that. Really not sure what's going on; any help would be massively appreciated!
Replies: 2 · Boosts: 0 · Views: 111 · Activity: Mar ’26
Images added in Reality Composer look darker in AR
I’m working with Reality Composer and noticed that images added directly to a scene appear significantly darker when viewed in AR. This seems different from how other objects in the scene respond to lighting, especially under varying real-world light conditions. Is this expected behavior? Are images treated with a different lighting model in Reality Composer? Is there any recommended way to get more consistent light response for image-based artworks?
Topic: Design · SubTopic: General
Replies: 1 · Boosts: 0 · Views: 547 · Activity: Mar ’26
RealityKit Scene
Hi, I’m wondering whether RealityKit has its own scene management system, since it uses ARView (backed by ARKit) to present AR content. Does RealityKit manage scenes independently, or does it rely entirely on ARKit’s scene handling? Thank you.
Replies: 1 · Boosts: 0 · Views: 174 · Activity: Feb ’26
Attaching a hand model to your hands
Hi, we have been working on an application that attaches a hand model to the user's hands. Apple provides an "Animating hand models in visionOS" sample project that is a useful starting point. https://aninterestingwebsite.com/documentation/visionOS/animating-hand-models-in-visionOS

We have been trying to create our own hand model to attach, but have had some issues with how it attaches to the hand. For our hand model we want to include the forearm all the way up to the user's elbow. I have attached a sample project of what our code currently looks like so you can run it; just select "show immersive space" to attach the models. The left hand model is the space glove that we were trying to mirror. The right hand model is our own model that we have been using. I have mapped each of the joints to the corresponding joint name on our model.

The first issue we are having seems to be based around the placement of the forearm: it attaches itself at the wrist. The second issue seems to be around rotation. Our team is looking for some guidance on what needs to change in order to map this model correctly. Thanks in advance!
Replies: 2 · Boosts: 0 · Views: 334 · Activity: Feb ’26
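On visionOS, hand tracking does expose forearm joints (.forearmWrist and .forearmArm), so one way to control forearm placement explicitly is to drive that part of the model from those joints rather than relying on wrist attachment alone. A hedged sketch, assuming you already receive HandAnchor updates from a HandTrackingProvider (the entity name is illustrative, not from the sample project):

import ARKit
import RealityKit

// Position a forearm entity each frame from the hand anchor's forearm joint.
// `forearmEntity` stands in for the forearm part of your own model hierarchy.
func updateForearm(_ forearmEntity: Entity, from handAnchor: HandAnchor) {
    guard let skeleton = handAnchor.handSkeleton else { return }

    let forearmJoint = skeleton.joint(.forearmArm)   // joint partway along the forearm
    guard forearmJoint.isTracked else { return }

    // world transform = world_from_anchor * anchor_from_joint
    let worldTransform = handAnchor.originFromAnchorTransform * forearmJoint.anchorFromJointTransform
    forearmEntity.setTransformMatrix(worldTransform, relativeTo: nil)
}

Rotation mismatches usually come down to the rest orientation of the model's forearm bone differing from ARKit's joint frame, so a fixed corrective rotation applied after this transform is often needed.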
ARSkeleton3D modelTransform always returns nil
I use ARKit for motion tracking. I get the skeleton joint coordinates and use them for animation. I didn't make any changes to the code, but I updated the iOS version from 18 to 26, and modelTransform now always returns nil.
https://aninterestingwebsite.com/documentation/arkit/arskeleton3d/modeltransform(for:)
For example:

bodyAnchor.skeleton.modelTransform(for: .init(rawValue: "head_joint"))

bodyAnchor is an ARBodyAnchor. I see the default skeleton on the screen, but now I can't get the coordinates out of it. I'm using an example from Apple's WWDC presentation.
https://aninterestingwebsite.com/documentation/arkit/capturing-body-motion-in-3d
Are there any changes in the API? Or is it just a bug?
Replies: 6 · Boosts: 0 · Views: 953 · Activity: Feb ’26
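A useful first diagnostic is to print the joint names the running OS actually reports and to query with the predefined ARSkeleton.JointName constants instead of raw strings, in case the raw name changed between releases. A minimal sketch:

import ARKit

// Diagnostic sketch: list the joint names the current skeleton definition exposes,
// and prefer the predefined constants over raw strings when querying transforms.
func dumpJointTransforms(for bodyAnchor: ARBodyAnchor) {
    let skeleton = bodyAnchor.skeleton
    print(skeleton.definition.jointNames)   // confirms whether "head_joint" still exists on this OS

    if let headTransform = skeleton.modelTransform(for: .head) {
        print("head (model space):", headTransform)
    } else {
        print("modelTransform(for: .head) returned nil")
    }
}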
Can I use the Camera API to shoot pictures with the wide camera, while AR is running on the main camera
I want to run ARKit on the main rear camera and, while it's running, shoot high-resolution pictures on the wide camera, without disturbing the AR tracking. Is this possible?
Replies: 0 · Boosts: 0 · Views: 397 · Activity: Feb ’26
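I am not aware of a supported way to open a second capture session on the wide camera while ARKit owns the rear cameras. The closest documented capability is capturing a high-resolution still from the camera ARKit is already using (iOS 16 and later); a minimal sketch, which is not quite what the post asks for but may be an acceptable fallback:

import ARKit

// High-resolution still from the camera ARKit is already using (iOS 16+),
// rather than a separate session on the wide camera.
func configureAndCapture(session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if let format = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
        configuration.videoFormat = format
    }
    session.run(configuration)

    session.captureHighResolutionFrame { frame, error in
        guard let frame = frame else {
            print("Capture failed: \(String(describing: error))")
            return
        }
        // Hand the pixel buffer to your photo pipeline.
        let pixelBuffer = frame.capturedImage
        print("Captured high-res frame:",
              CVPixelBufferGetWidth(pixelBuffer), "x", CVPixelBufferGetHeight(pixelBuffer))
    }
}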
Full Body Tracking
Hi, is there a way to track feet in a visionOS app in an immersive space? I want the whole body to be visible in VR, and I want to know if the user touches an object with their foot.
Replies: 1 · Boosts: 0 · Views: 398 · Activity: Feb ’26
ARKit Face Tracking works in total darkness?
I’ve seen, mainly in discussions with AIs, that ARFaceTrackingConfiguration uses the same technology as Face ID and therefore should work in complete darkness. However, I haven’t been able to achieve this. Does anyone know if this is actually true? I'm using an iPhone 16 to test, and the Face ID works well in darkness.
Replies: 0 · Boosts: 0 · Views: 145 · Activity: Feb ’26
How to visualize AR-dependent app if not supported on Simulator for Swift Student Challenge?
Recently, applications for the Swift Student Challenge opened up. I noticed that when you select "where to run your app" (mine was developed in Xcode 26) and choose Xcode 26, there is a note underneath it that basically says all Xcode projects will be run on the Simulator. What if my project is dependent on AR? How would I let the judges test my submission?
Replies: 2 · Boosts: 0 · Views: 314 · Activity: Feb ’26
Showing 3D/AR content on multiple pages in iOS with RealityView
Hi. I'm trying to show 3D or AR content on multiple pages in an iOS app, but I have found that if a RealityView is used earlier in a flow, then later uses will never display the camera feed, even if those earlier uses do not use any spatial tracking. For example, in the following simplified view, the second page's RealityView will not display the camera feed even though the first view does not use it nor start a tracking session.

struct ContentView: View {
    var body: some View {
        NavigationStack {
            VStack {
                RealityView { content in
                    content.camera = .virtual
                    // showing model skipped for brevity
                }
                NavigationLink("Second Page") {
                    RealityView { content in
                        content.camera = .spatialTracking
                    }
                }
            }
        }
    }
}

What is the way around this so that 3D content can be displayed in multiple places in the app without preventing AR content from working? I have also found the same problem when wrapping an ARView for use in SwiftUI.
Replies: 0 · Boosts: 0 · Views: 377 · Activity: Feb ’26
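I do not know of an officially documented fix for this, but one pattern worth trying is to stop creating independent camera-backed views per page and instead share a single ARView whose session is explicitly paused when its page disappears. A hedged sketch of that workaround (names are illustrative, and it may not cover every navigation flow):

import SwiftUI
import RealityKit
import ARKit

// Own a single ARView higher up in the app and rerun/pause its session as the
// hosting SwiftUI page comes and goes, instead of letting several camera-backed
// views compete for the camera.
struct ARViewContainer: UIViewRepresentable {
    let arView: ARView   // shared instance owned by the app

    func makeUIView(context: Context) -> ARView {
        arView.session.run(ARWorldTrackingConfiguration())
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    static func dismantleUIView(_ uiView: ARView, coordinator: ()) {
        uiView.session.pause()   // release the camera when this page goes away
    }
}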
The AccessoryAnchor transform does not match any of the Accessory.LocationName options.
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to use an AnchorEntity on the controller using RealityKit. However, none of the three options for Accessory.LocationName, which should be used to define the AnchorEntity target, seem to match the position on the controller that is being sent from ARKit.

The attached picture shows two transforms:
RealityKit - using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit - using originFromAnchorTransform for AccessoryTrackingProvider.

They are not aligned at the same point. As for the other options of Accessory.LocationName, .aim is located at the tip of the controller and .grip is the same position as .gripSurface but with a different orientation. I am wondering why there is no option for Accessory.LocationName that actually matches the transform captured by ARKit?
Replies: 3 · Boosts: 0 · Views: 1.5k · Activity: Jan ’26
How to cast shadow on OcclusionMaterial in visionOS
I have a ModelEntity with a GroundingShadowComponent:

entity.enumerateHierarchy { child, stop in
    child.components.set(GroundingShadowComponent(castsShadow: true))
}

When I set it on the table, I can see the shadow on the table, even if I disable plane detection. However, when I enable plane detection and the plane's material is OcclusionMaterial, I cannot see the shadow on the table. As far as I know, receivesDynamicLighting is not usable in visionOS. So how can I cast a shadow on OcclusionMaterial in visionOS? Or rather, is it possible to have the shadow properly displayed on the tabletop while ensuring that I cannot see objects beneath the table through it?
Replies: 1 · Boosts: 0 · Views: 568 · Activity: Jan ’26