Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

Creating a voxel mesh and rendering it using Metal within a RealityKit ImmersiveView
Hi everyone, I'm creating an educational app that lets you do computational design in an immersive environment on the Vision Pro. The app is free and can be found here: https://apps.apple.com/us/app/arcade-topology/id6742103633 The problem is that the voxel mesh I currently build uses one ModelEntity per voxel, and I recently read that this is terrible for scalability; I'm already seeing issues once I get into the thousands of voxels. I also read that I should take advantage of the GPU and use Metal for this. Could someone point me to a tutorial or article that discusses this? In essence, I need to create a 3D voxel mesh whose voxels have to update their opacity within an iterative loop. Thanks! —Alejandro
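A minimal sketch of an intermediate step before going all the way to Metal: merge every voxel cube into a single MeshResource so the whole grid becomes one entity instead of thousands of ModelEntity instances. The function name, voxel layout, winding order, and material below are illustrative placeholders, and per-voxel opacity updates would still need chunked rebuilds or a LowLevelMesh/Metal approach on top of this.

import RealityKit

// Sketch: build one merged mesh from many voxel cubes (hypothetical helper).
@MainActor
func makeVoxelGridEntity(voxelCenters: [SIMD3<Float>], voxelSize: Float) throws -> ModelEntity {
    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []

    let half = voxelSize / 2
    // Eight corners of a cube and its 12 triangles (winding kept simple for illustration).
    let corners: [SIMD3<Float>] = [
        [-half, -half, -half], [ half, -half, -half], [ half,  half, -half], [-half,  half, -half],
        [-half, -half,  half], [ half, -half,  half], [ half,  half,  half], [-half,  half,  half]
    ]
    let cubeIndices: [UInt32] = [
        0, 2, 1, 0, 3, 2,   4, 5, 6, 4, 6, 7,
        0, 1, 5, 0, 5, 4,   2, 3, 7, 2, 7, 6,
        0, 4, 7, 0, 7, 3,   1, 2, 6, 1, 6, 5
    ]

    for center in voxelCenters {
        let base = UInt32(positions.count)
        positions.append(contentsOf: corners.map { center + $0 })
        indices.append(contentsOf: cubeIndices.map { base + $0 })
    }

    var descriptor = MeshDescriptor(name: "voxels")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)

    // One mesh, one entity, one draw submission for the entire grid.
    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [SimpleMaterial(color: .white, isMetallic: false)])
}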
3
0
85
Jun ’25
Issue: Closing Bounded Volume Never Re-Opens
Greetings. I am having this issue with a Unity PolySpatial visionOS app. We have a main Bounded Volume for our app, plus other native UI windows that appear when we interact with objects inside it. If a user closes the main Bounded Volume, sometimes the app quits and sometimes it doesn't. If we go back to the home screen and reopen the app, our main Bounded Volume doesn't always appear; only the native UI windows we left open are visible, yet we can sometimes still hear sounds that are playing in our Bounded Volume. What solutions are there to make sure our Bounded Volume always appears when the app is open?
1
0
77
Jun ’25
SpatialEventGesture Not Working to Show Hidden Menu in Immersive Panorama View - visionOS
Problem Description
I'm developing a Vision Pro app that displays 360° panoramic photos in a full immersive space. I have a floating menu that auto-hides after 5 seconds, and I want users to be able to show the menu again using spatial gestures (particularly pinch gestures) when it's hidden. However, the SpatialEventGesture implementation is not working as expected: the menu doesn't appear when users perform pinch gestures or other spatial interactions in the immersive space.

Current Implementation
Here's the relevant gesture-detection code in my ImmersiveView:

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // RealityView content setup with panoramic sphere...
            let rootEntity = Entity()
            content.add(rootEntity)
            // Load panoramic content here...
        }
        // Using SpatialEventGesture to handle multiple spatial gestures
        .gesture(
            SpatialEventGesture()
                .onEnded { eventCollection in
                    // Check menu visibility state
                    if !appModel.isPanoramaMenuVisible {
                        // Iterate through event collection to handle various gestures
                        for event in eventCollection {
                            switch event.kind {
                            case .touch:
                                print("Detected spatial touch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .indirectPinch:
                                print("Detected spatial pinch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .pointer:
                                print("Detected spatial pointer gesture, showing menu")
                                showMenuWithGesture()
                                return
                            @unknown default:
                                print("Detected unknown spatial gesture: \(event.kind)")
                                showMenuWithGesture()
                                return
                            }
                        }
                    }
                }
        )
        // Keep long press gesture as backup
        .simultaneousGesture(
            LongPressGesture(minimumDuration: 1.5)
                .onEnded { _ in
                    if !appModel.isPanoramaMenuVisible {
                        print("Detected long press gesture, showing menu")
                        showMenuWithGesture()
                    }
                }
        )
    }

    private func showMenuWithGesture() {
        if !appModel.isPanoramaMenuVisible {
            appModel.showPanoramaMenu()
            if !appModel.windowExists(id: "PanoramaMenu") {
                openWindow(id: "PanoramaMenu", value: "menu")
            }
        }
    }
}

What I've Tried
Multiple SpatialTapGesture approaches: originally tried using multiple .gesture() modifiers with SpatialTapGesture(count: 1) and SpatialTapGesture(count: 2), but realized they override each other.
SpatialEventGesture implementation: switched to SpatialEventGesture to handle multiple event types (.touch, .indirectPinch, .pointer), but pinch gestures still don't trigger the menu.
Added debugging: console logs show that the gesture callbacks are never called when performing pinch gestures in the immersive space.
Backup LongPressGesture: added a simultaneous long press gesture as a backup, which also doesn't work consistently.

Expected Behavior
When the panorama menu is hidden (after the 5-second auto-hide), users should be able to:
Perform a pinch gesture (indirect pinch) to show the menu
Tap in space to show the menu
Use other spatial gestures to show the menu

Questions
Is SpatialEventGesture the correct approach for detecting gestures in a full immersive RealityView?
Are there any special considerations for gesture detection when the RealityView contains a large panoramic sphere that might be intercepting gestures?
Should I be using a different gesture approach for visionOS immersive spaces?
Is there a way to ensure gestures work even when the RealityView content (panoramic sphere) might be blocking them?

Environment
Xcode 16.1
visionOS 2.5
Testing on Vision Pro device
App uses SwiftUI + RealityKit

Any guidance on the proper way to implement spatial gesture detection in visionOS immersive spaces would be greatly appreciated!

Additional Context
The app manages multiple windows, and the gesture detection should work specifically when in the immersive panorama mode with the menu hidden. Thank you for any help or suggestions!
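One possible direction, offered as a hedged sketch rather than a confirmed fix: RealityKit only routes spatial gestures to entities that carry both a CollisionComponent and an InputTargetComponent, so a bare panorama sphere gives the system nothing to hit-test against and the gesture callbacks never fire. The entity setup and menu state below are placeholders standing in for the poster's own AppModel and window handling.

import SwiftUI
import RealityKit

struct PanoramaGestureView: View {
    @State private var menuVisible = false

    var body: some View {
        RealityView { content in
            // Placeholder panorama sphere; the real app maps its 360° texture here.
            let panoramaSphere = ModelEntity(mesh: .generateSphere(radius: 10))

            // Give the sphere a collision shape and mark it as an input target
            // so indirect pinches inside the immersive space hit-test against it.
            panoramaSphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 10)]))
            panoramaSphere.components.set(InputTargetComponent())
            content.add(panoramaSphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    // Stand-in for appModel.showPanoramaMenu() / openWindow(...)
                    if !menuVisible { menuVisible = true }
                }
        )
    }
}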
1
0
139
Jun ’25
Launching a Unity fully immersive game from SwiftUI
I am trying to launch a fully immersive Unity game from a SwiftUI view. The game uses Metal rendering with Compositor Services. I added the Unity Xcode project to the workspace and added the necessary bridge code. When I tap the button that calls ufw?.showUnityWindow(), the game does not start and I get the following in the console: "AR session failed to start after 5 seconds. Is the app configured to use an immersive space?"
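The error suggests the host app never declares or opens an immersive space for the Compositor Services renderer to attach to. The sketch below shows only the SwiftUI side under that assumption; the scene id, view names, and the empty renderer closure (where the Unity bridge would actually take over) are placeholders, and a fully immersive build may additionally need the app's default scene session role in Info.plist pointed at an immersive space scene.

import SwiftUI
import CompositorServices

// Minimal layer configuration; a real renderer would tune pixel formats, foveation, etc.
struct BasicLayerConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        // Keep the system defaults for this sketch.
    }
}

struct LaunchView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter immersive game") {
            Task {
                // Open the declared immersive space before handing off to the game.
                _ = await openImmersiveSpace(id: "UnityImmersiveSpace")
            }
        }
    }
}

@main
struct LauncherApp: App {
    var body: some Scene {
        WindowGroup {
            LaunchView()
        }

        // Without an ImmersiveSpace scene, the AR session that Compositor
        // Services rendering depends on has nothing to start in, which matches
        // the console message in the post.
        ImmersiveSpace(id: "UnityImmersiveSpace") {
            CompositorLayer(configuration: BasicLayerConfiguration()) { layerRenderer in
                // In a Unity build, the Unity bridge's Metal render loop would
                // take ownership of `layerRenderer` here.
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}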
2
0
86
Jun ’25
white gap between objects in RealityView
I want to display a huge image in a RealityView in 3D space on Vision Pro. Instead of one giant file, I'm using a lot of large images: I generate multiple planes exactly beside each other and put one image on each of them. Although the planes are placed exactly side by side, there is still a white gap between them (image below). Does anybody know how to fix this issue?
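A hedged sketch of one pragmatic workaround: tiny floating-point differences at the seams can let the background show through as thin lines, so laying the tiles out with a very small overlap keeps rounding from ever opening a visible gap. The helper name, layout math, and overlap factor are assumptions for illustration, not a confirmed root cause.

import RealityKit

// Sketch: tile textured planes with a slight overlap so seams can't show through.
@MainActor
func makeTiledWall(tileTextures: [TextureResource], columns: Int, tileSize: Float) -> Entity {
    let root = Entity()
    let overlap: Float = tileSize * 0.001   // ~0.1% overlap per tile (tunable assumption)

    for (index, texture) in tileTextures.enumerated() {
        let row = index / columns
        let column = index % columns

        var material = UnlitMaterial()
        material.color = .init(tint: .white, texture: .init(texture))

        let plane = ModelEntity(
            mesh: .generatePlane(width: tileSize + overlap, height: tileSize + overlap),
            materials: [material]
        )
        // Centers stay on the exact grid; only the plane itself is slightly oversized.
        plane.position = [Float(column) * tileSize, Float(row) * tileSize, 0]
        root.addChild(plane)
    }
    return root
}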
0
0
98
May ’25
Accessing LiDAR Depth Data and Scene Reconstruction on Apple Vision Pro
Hello, I'm developing a visionOS application for Apple Vision Pro that aims to scan unknown physical objects, capture their 3D data (such as meshes or point clouds), and export them as 3D models. Ideally, I'd also like to visualize these reconstructions in real time within the headset. This functionality is similar to what's available in Reality Composer on iPad and iPhone, but I'm seeking to implement it natively on Vision Pro. I've reviewed the visionOS documentation but haven't found clear guidance on accessing LiDAR depth data or performing scene reconstruction. Specifically, I'm interested in:
1. Accessing LiDAR or depth data from Vision Pro's sensors.
2. Utilizing ARKit's scene reconstruction capabilities on visionOS.
3. Exporting captured 3D data as models (e.g., USDZ or OBJ formats).
Are there APIs or frameworks in visionOS that support these features?
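For point 2, a minimal sketch: as far as I know, third-party visionOS apps don't get raw LiDAR depth frames, but ARKit's SceneReconstructionProvider delivers mesh anchors you can visualize or accumulate for export. This assumes an open immersive space and granted world-sensing authorization (NSWorldSensingUsageDescription); the actual OBJ/USDZ export step is left out.

import ARKit
import RealityKit

// Sketch: receive scene-reconstruction mesh anchors on visionOS.
func runSceneReconstruction() async throws {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()

    guard SceneReconstructionProvider.isSupported else { return }
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        // meshAnchor.geometry holds vertices and faces you could accumulate
        // for export with your own writer, or turn into a collision shape:
        let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor)
        _ = shape
    }
}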
0
1
94
May ’25
How to get the transform of the joint in a skeleton when it is Animating
I have an entity that was created using Mixamo, and it has an animation. After the animation completes, the mesh of the robot is not where the entity is positioned. What I want is something like: when the animation finishes, set the root entity's transform to the mesh's transform. There are no transforms applied to any of the children of the model's root, which means the motion comes from the skeleton being driven by the animation. Is there a way to apply the final position of the skeleton's root joint to the root entity, so the entity ends up where the animation finished just before the next animation plays?
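A hedged sketch of one approach: ModelEntity exposes the skinned skeleton through jointNames and jointTransforms, so when the clip finishes you can read the root joint's local transform, bake it into the entity, and reset the joint. The joint lookup ("Hips"/"root") depends on the Mixamo rig and the math here ignores any rotation or scale on the entity, so treat this as a starting point rather than a complete solution.

import RealityKit

// Sketch: on animation completion, move the entity to where the skeleton root ended up.
func snapEntityToSkeletonRoot(model: ModelEntity, in scene: RealityKit.Scene) -> EventSubscription {
    scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: model) { _ in
        // Mixamo rigs usually name the root joint "Hips" or "root" (assumption).
        guard let rootIndex = model.jointNames.firstIndex(where: { $0.hasSuffix("Hips") || $0 == "root" }) else { return }
        let jointLocal = model.jointTransforms[rootIndex]

        // Bake the joint's translation into the entity and zero it on the joint,
        // so the visual pose stays put while the entity "catches up".
        model.transform.translation += jointLocal.translation
        model.jointTransforms[rootIndex].translation = .zero
    }
}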
0
0
83
May ’25
TextComponent on iOS/macOS pixelated when viewed from short distance
Hello, I've been tinkering a bit with TextComponent. Based on the docs, it seems like this component should always render sharp text, no matter how close the user gets: "RealityKit dynamically adjusts the backing size to a value that results in high-fidelity text at its current location." It does on visionOS, but on iOS and macOS the text gets pixelated when I get close to it, as if it's just rendered once as a plain image texture. Can anyone tell me whether this is expected behavior or a bug? Here are two screenshots for comparison (iPhone and Vision Pro). Thanks!
0
0
69
May ’25
How to make .blur(radius:) visually affect RealityView content?
According to the official documentation, the .blur(radius:) modifier can apply a Gaussian blur to a RealityView. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears blurred. Here's the test code:

struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}

My question: how can I make .blur(radius:) visually affect the content rendered in a RealityView? Can you provide a working example where .blur() visually affects any part of a RealityView? Thanks!
0
0
57
May ’25
How to add visual thickness to a glass background view
Hi guys, In visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness. However, I noticed that in an app built by Apple, when viewing a glass background view from the side, it appears to have thickness. I tried adding .frame(depth:) to a glass background view, but it appears as two separate layers spaced by the depth value. My question is: Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture? Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!
0
0
70
May ’25
Collision Detection Fails After Anchoring ModelEntity to Hand in VisionOS
I'm starting my journey in developing an immersive app for visionOS. I've been making steady progress, but I've encountered a specific challenge that I haven't been able to resolve. I created two ModelEntity objects, a sphere and a cube, and added a DragGesture to the cube. When I drag the cube over the sphere, the two collide correctly, and the collision is logged in the console. So far, everything works as expected. However, when I try to anchor the cube to my hand, the collision stops working. It's as if the cube loses its ability to detect collisions once it's anchored. Any guidance or clarification on this behavior would be greatly appreciated.

//  ImmersiveView.swift
//  estudos_vision
//
//  Created by Lailan Rogerio Rodrigues Matos on 15/05/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel
    @State private var session: SpatialTrackingSession?
    @State private var box = ModelEntity()
    @State private var subs: [EventSubscription] = []
    @State private var ballEntity: Entity?

    var body: some View {
        RealityView { content in
            // Load initial content from the RealityKit scene.
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            // Create and run a spatial tracking session.
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            _ = await session.run(configuration)
            self.session = session

            // Create a red box.
            let boxMesh = MeshResource.generateBox(size: 0.2)
            let material = SimpleMaterial(color: .red, isMetallic: false)
            box = ModelEntity(mesh: boxMesh, materials: [material])
            box.position.y += 0.15 // Position the box slightly above the origin.

            // Configure the box for user interaction and physics.
            box.components.set(InputTargetComponent(allowedInputTypes: .indirect)) // Make it interactive.
            box.generateCollisionShapes(recursive: false) // Generate collision shapes for physics.
            box.components.set(PhysicsBodyComponent( // Add physics behavior.
                massProperties: .default,
                material: .default,
                mode: .kinematic // Use kinematic mode so it can be moved by user interaction.
            ))
            box.components.set(GroundingShadowComponent(castsShadow: true)) // Add a shadow.
            //content.add(box) // Commented out to add to hand anchor instead.

            // Create a left hand anchor and add the box as a child.
            let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
            handAnchor.addChild(box)
            content.add(handAnchor) // Add the hand anchor to the scene.

            // Create a sphere.
            let ball = ModelEntity(mesh: .generateSphere(radius: 0.15))
            ball.position = [0.0, 1.5, -1.0] // Initial position of the ball.
            ball.generateCollisionShapes(recursive: false) // Add collision.
            ball.name = "Sphere"
            content.add(ball)
            ballEntity = ball

            // Subscribe to collision events between the box and other entities.
            let event = content.subscribe(to: CollisionEvents.Began.self, on: box) { ce in
                print("Collision between \(ce.entityA.name) and \(ce.entityB.name) occurred")
                //ce.entityA.removeFromParent() // removes the colliding object
                //ce.entityB.removeFromParent()
            }
            Task { subs.append(event) }
        }
        // Add a drag gesture to the box, allowing the user to move it.
        .gesture(
            DragGesture()
                .targetedToEntity(box) // Target the drag gesture to the box.
                .onChanged({ value in
                    // Update the position of the box based on the drag gesture.
                    box.position = value.convert(value.location3D, from: .local, to: box.parent!)
                })
        )
    }
}

#Preview(immersionStyle: .full) {
    ImmersiveView()
        .environment(AppModel())
}
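A hedged sketch of one possible fix, assuming visionOS 2's AnchoringComponent.physicsSimulation option: by default an AnchorEntity runs its own isolated physics simulation, so entities parented to it stop producing CollisionEvents with entities in the main scene. Opting the anchor out of its private simulation keeps the hand-anchored box colliding with the sphere. The helper name and parameters are placeholders for the poster's own setup.

import SwiftUI
import RealityKit

// Sketch: keep a hand-anchored entity in the same physics simulation as the scene.
@MainActor
func attachToLeftPalm(_ box: ModelEntity, in content: RealityViewContent) {
    let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)

    // Assumption: physicsSimulation = .none opts this anchor out of its own
    // simulation so its children collide with entities added directly to content.
    var anchoring = handAnchor.anchoring
    anchoring.physicsSimulation = .none
    handAnchor.anchoring = anchoring

    handAnchor.addChild(box)
    content.add(handAnchor)
}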
1
0
69
May ’25
The folding and unfolding effect of the NBA sand table
I saw this impressive sand-table experience: its unfolding and folding effect looks like cards being spread out, which is very interesting, but I don't know how to achieve it. I'd like to know whether there are ways to recreate this effect and get some ideas. Can it be achieved with the existing APIs?
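Not knowing how the NBA app does it, here is a hedged sketch of one way to get a card-spread style unfold with standard RealityKit animation: each tile animates from a collapsed pose to its own slot in a grid, with slightly staggered start times. The layout math, spacing, and timing are illustrative assumptions.

import RealityKit

// Sketch: fan a set of tile entities out from the center like dealt cards.
@MainActor
func unfold(tiles: [Entity], columns: Int, spacing: Float = 0.12, duration: TimeInterval = 0.6) {
    for (index, tile) in tiles.enumerated() {
        let row = index / columns
        let column = index % columns

        var target = Transform()
        target.translation = [Float(column) * spacing, 0, Float(row) * spacing]

        // Stagger the starts slightly so the tiles spread out one after another.
        Task {
            try? await Task.sleep(nanoseconds: UInt64(Double(index) * 0.03 * 1_000_000_000))
            tile.move(to: target, relativeTo: tile.parent, duration: duration, timingFunction: .easeInOut)
        }
    }
}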
1
0
38
May ’25
How to implement the semi-transparent overlay effect in Immersive View?
I noticed that when I drag the menu window in an immersive view, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced. How does visionOS implement this effect? Is there any API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
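I can't speak to how the system implements its drag pass-through effect, but as a rough approximation you can fade your own entities with RealityKit's OpacityComponent whenever you want the surroundings to show through. The trigger condition and opacity value below are placeholders.

import RealityKit

// Sketch: dim a set of entities by attaching OpacityComponent, restore by removing it.
@MainActor
func setDimmed(_ dimmed: Bool, for entities: [Entity], opacity: Float = 0.4) {
    for entity in entities {
        if dimmed {
            entity.components.set(OpacityComponent(opacity: opacity))
        } else {
            entity.components.remove(OpacityComponent.self)
        }
    }
}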
0
0
84
May ’25
Is `ParticleEmitterComponent` implemented for `RealityKit` on iOS?
Hi there, I'm looking to add a particle emitter to the augmented reality app I'm developing with RealityKit, targeting iOS. The documentation for ParticleEmitterComponent indicates that iOS 18.0+ is supported, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available for iOS. Would it be possible to get clarification on this?
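For context, a hedged sketch of how the availability-gated usage would look, assuming the component really is iOS 18+: the symbol only resolves when building against the iOS 18 SDK (Xcode 16 or later), and with an earlier deployment target it needs a runtime availability check. The emitter settings below are illustrative.

import RealityKit

// Sketch: create an entity with a particle emitter, gated on iOS 18 availability.
@MainActor
func makeEmitterEntity() -> Entity {
    let entity = Entity()
    if #available(iOS 18.0, *) {
        var emitter = ParticleEmitterComponent()
        emitter.emitterShape = .sphere
        emitter.mainEmitter.birthRate = 200
        entity.components.set(emitter)
    }
    return entity
}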
1
0
105
May ’25
Vision Pro stuck on "Retrieving configuration"
Hi, after upgrading to 2.4.1 (from 1.0), my Vision Pro gets stuck on the "Retrieving configuration" screen. The Apple Store wouldn't take my case since the device was sold in the USA and the product isn't yet available on the Italian market. I don't have a Developer Strap; how can I resolve the issue? Thank you
0
0
91
May ’25
Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment
Xcode: 16.2
visionOS SDK 2.4
Swift 6.1
Targets: Apple Vision Pro (immersive space)
Frameworks: ARKit, RealityKit, SwiftUI

What I'm Trying to Do
I have a view-model class PlacementManager that holds two AR providers:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).

What's Happening
If I declare them as:

private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()

I get compile errors when I later do:

self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant

If I change them to uninitialized vars:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

then in my init() I get:

self used in property access 'worldTracking' before all stored properties are initialized

Code snippet

@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()

        try await appState!.arkitSession.run(
            [newWorldTracking, newPlaneDetection]
        )

        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}

What I've Tried
Giving them default values at declaration (= WorldTrackingProvider())
Initializing them at the top of init() before any use
Passing the new providers into arkitSession.run(...)

My Question
What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that:
They're fully initialized before use in init(), and
I can swap them out later in setEnvironment(...) without compiler errors?

Any pointers (or links to forum threads / docs) would be greatly appreciated!
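A hedged sketch of one workable pattern: give the providers initial values at the declaration (so they exist before init runs) and keep them as var (so setEnvironment can swap in fresh instances, since a stopped data provider can't be re-run). The simplified class below owns its own ARKitSession and drops the redundant ObservableObject conformance; the original code reaches the session through appState, so treat these details as assumptions.

import ARKit
import Observation

// Sketch: reassignable ARKit data providers with default values at declaration.
@Observable
@MainActor
final class PlacementManagerSketch {
    private var worldTracking = WorldTrackingProvider()
    private var planeDetection = PlaneDetectionProvider()

    // Assumed to be owned here for the sketch; the original uses appState.arkitSession.
    private let arkitSession = ARKitSession()

    func setEnvironment() async throws {
        // Stop the current providers, then replace them with new instances,
        // because providers can't be run again after they've been stopped.
        arkitSession.stop()

        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await arkitSession.run([newWorldTracking, newPlaneDetection])

        worldTracking = newWorldTracking
        planeDetection = newPlaneDetection
    }
}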
0
0
65
May ’25
Unity PolySpatial – Live handheld camera feed of graspable objects not rendering on Vision Pro
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration. The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen. In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera’s view as expected. However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended. Steps I have taken: Created a new layer (CameraFeed) and assigned the relevant objects to it. Set the secondary camera’s culling mask to render only the CameraFeed layer. Assigned the RenderTexture as the camera’s target texture. Applied the RenderTexture to an Unlit/Texture material on a quad. Confirmed the camera is active and correctly positioned relative to the object. From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial. My questions are: Is there any official way to enable multiple camera rendering of RealityKit-managed objects through PolySpatial? Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed? Has anyone found alternative design patterns or methods for this kind of interaction? Environment: Unity 6.0 , PolySpatial 2.2.4, Apple Vision OS XR 2.2.4 Any insight or suggestions would be greatly appreciated. Thank you.
0
0
50
Apr ’25