visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under the visionOS tag

200 Posts
Should we design for Liquid Glass on visionOS?
Liquid Glass was introduced as a universal design language for all platforms, but it isn't supported by the visionOS 26 beta. For a small team creating a visionOS app targeted for release in fall 2026, should we focus our design work on Liquid Glass or on the existing visionOS design language?
Topic: Design · SubTopic: General
Replies: 1 · Boosts: 1 · Views: 135 · Jun ’25

visionOS 26.0 beta does not call .onTapGesture
Prior to visionOS 2.5, .onTapGesture was called with the following structure, but in the visionOS 26.0 beta it is no longer called. Is .onTapGesture deprecated in visionOS 26.0 and above, or is it a bug?

    TabView(selection: $selectedTab) {
        WebViewView(selectedTab: $selectedTab)
            .onTapGesture {
                viewModel.userDidInteract = true
            }
    }
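One direction worth trying, as a sketch rather than a confirmed fix for the beta: attach the tap as a simultaneous gesture, which sometimes still fires when embedded content swallows a plain .onTapGesture. The WebViewView and view-model types below are placeholder stand-ins for the ones named in the post.

```swift
import SwiftUI

// Placeholder stand-ins for the post's types, so the sketch compiles.
final class AppViewModel: ObservableObject {
    @Published var userDidInteract = false
}

struct WebViewView: View {
    @Binding var selectedTab: Int
    var body: some View { Text("web content placeholder") }
}

struct ContentView: View {
    @State private var selectedTab = 0
    @StateObject private var viewModel = AppViewModel()

    var body: some View {
        TabView(selection: $selectedTab) {
            WebViewView(selectedTab: $selectedTab)
                // A simultaneous gesture competes with, rather than
                // defers to, gestures inside the embedded content.
                .simultaneousGesture(
                    TapGesture().onEnded { _ in
                        viewModel.userDidInteract = true
                    }
                )
                .tag(0)
        }
    }
}
```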
Replies: 2 · Boosts: 0 · Views: 62 · Jun ’25

visionOS: HUD-mode windows
If I understand correctly, a new Enterprise API introduced in visionOS 26 allows windows to be fixed to the user's frame of reference, implementing something like a "head-up display" in which the window tracks the user's movements. Is this API only available to enterprise applications, and if so, is there a plan to make it available to every kind of app?
Replies: 3 · Boosts: 0 · Views: 50 · Jun ’25

For a third year, no screenshot capability for immersive visionOS apps... here's a workaround?
Since only the user can take a screenshot, using the Apple Vision Pro's top buttons, the only workaround available to an immersive app that needs a screenshot to document the user's creative interior design choices is:

- ask the user to take a screenshot;
- wait until the user taps a button indicating the screenshot has been taken;
- have the app open the PhotosPicker and ask the user to select the screenshot;
- when the user presses Done, the screenshot is handed off to the app.

One wonders why there is no Apple API for doing this in a simple, privacy-protective way, such as:

- when called, the Apple API captures the screenshot in Apple-secured memory;
- the API displays the screenshot to the user with appropriate privacy warnings and asks if the user wants to (a) share this screenshot with the app, (b) cancel, or (c) retake the screenshot;
- if the user approves, the app receives the screenshot.
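A minimal sketch of the hand-off step in the workaround above, assuming SwiftUI's system PhotosPicker. The view and property names are illustrative; this implements the described workaround, not a screenshot-capture API.

```swift
import SwiftUI
import PhotosUI

// Sketch of the picker hand-off: after the user takes a screenshot with
// the hardware buttons, let them hand it to the app via PhotosPicker.
struct ScreenshotImportView: View {
    @State private var selection: PhotosPickerItem?
    @State private var screenshot: UIImage?

    var body: some View {
        VStack(spacing: 16) {
            if let screenshot {
                Image(uiImage: screenshot)
                    .resizable()
                    .scaledToFit()
            }
            // Filtering to .screenshots narrows the picker to screenshots.
            PhotosPicker("Select the screenshot you just took",
                         selection: $selection,
                         matching: .screenshots)
        }
        .onChange(of: selection) { _, item in
            Task {
                // Load the picked item as raw data, then decode the image.
                guard let item,
                      let data = try? await item.loadTransferable(type: Data.self),
                      let image = UIImage(data: data) else { return }
                screenshot = image
            }
        }
    }
}
```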
Replies: 3 · Boosts: 0 · Views: 53 · Jun ’25

WWDC 25 RemoteImmersiveSpace - Support for Passthrough Mode? RealityKit?
This is related to the WWDC presentation "What's new in Metal rendering for immersive apps", specifically the macOS spatial streaming to visionOS feature (for reference: the page in the docs). The presentation demonstrates it using a full immersive space and Metal rendering via Compositor Services. I'd like clarity on a few things:

- Is the remote device wireless, or must the visionOS device be connected via a wired connection?
- Is there a limit to the number of remote devices, and if not, could macOS render different things per remote device simultaneously?
- Can I also use mixed mode with passthrough enabled, instead of just a fully immersive mode?
- Can I use RealityKit instead of Metal? If so, may I have an example, or would someone point me to one?
Replies: 4 · Boosts: 0 · Views: 133 · 4w

visionOS Simulator: CloudKitWrapper not found
Hello, I'm working on a Unity game which uses the Apple Arcade CloudKit Unity plugin. Cloud save works on all platforms except visionOS. I tried to debug using the visionOS 2.4 Simulator. When the game starts, Xcode displays the following error:

    DllNotFoundException: Unable to load DLL 'CloudKitWrapper'. Tried the load the following dynamic libraries: Unable to load dynamic library '/CloudKitWrapper' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/CloudKitWrapper, 0x0005): tried: '/Users/seb/Library/Developer/Xcode/DerivedData/Unity-VisionOS-akwybgjotadlwrghmmfkhbhpuduf/Build/Products/Debug-xrsimulator/CloudKitWrapper' (no such file), '/Library/Developer/CoreSimulator/Volumes/xrOS_22O237/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.4.simruntime/Contents/Resources/RuntimeRoot/usr/lib/system/introspection/CloudKitWrapper' (no such file), '/Library/Developer/CoreSimulator/Volumes/xrOS_22O237/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.4.simruntime/Contents/Resources/RuntimeRoot/CloudKitWrapper' (no such file), '/CloudKitWrapper' (no such file)
      at Apple.CloudKit.CKContainer.CKContainer_Default () [0x00000] in <00000000000000000000000000000000>:0
      at Apple.CloudKit.CKContainer.Default () [0x00000] in <00000000000000000000000000000000>:0

I opened up the "Debug-xrsimulator" folder and indeed there is no CloudKitWrapper there. However, if I "show content" on the app and navigate to the "Frameworks" folder, all Apple Arcade plugins are there, including CloudKit. I guess the plugin is in the right location, but the code tries to load it from the wrong path.
Replies: 2 · Boosts: 0 · Views: 72 · Jun ’25

SpatialEventGesture Not Working to Show Hidden Menu in Immersive Panorama View - visionOS
Problem Description

I'm developing a Vision Pro app that displays 360° panoramic photos in a full immersive space. I have a floating menu that auto-hides after 5 seconds, and I want users to be able to show the menu again using spatial gestures (particularly pinch gestures) when it's hidden. However, the SpatialEventGesture implementation is not working as expected: the menu doesn't appear when users perform pinch gestures or other spatial interactions in the immersive space.

Current Implementation

Here's the relevant gesture detection code in my ImmersiveView:

    import SwiftUI
    import RealityKit

    struct ImmersiveView: View {
        @EnvironmentObject var appModel: AppModel
        @Environment(\.openWindow) private var openWindow

        var body: some View {
            RealityView { content in
                // RealityView content setup with panoramic sphere...
                let rootEntity = Entity()
                content.add(rootEntity)
                // Load panoramic content here...
            }
            // Using SpatialEventGesture to handle multiple spatial gestures
            .gesture(
                SpatialEventGesture()
                    .onEnded { eventCollection in
                        // Check menu visibility state
                        if !appModel.isPanoramaMenuVisible {
                            // Iterate through event collection to handle various gestures
                            for event in eventCollection {
                                switch event.kind {
                                case .touch:
                                    print("Detected spatial touch gesture, showing menu")
                                    showMenuWithGesture()
                                    return
                                case .indirectPinch:
                                    print("Detected spatial pinch gesture, showing menu")
                                    showMenuWithGesture()
                                    return
                                case .pointer:
                                    print("Detected spatial pointer gesture, showing menu")
                                    showMenuWithGesture()
                                    return
                                @unknown default:
                                    print("Detected unknown spatial gesture: \(event.kind)")
                                    showMenuWithGesture()
                                    return
                                }
                            }
                        }
                    }
            )
            // Keep long press gesture as backup
            .simultaneousGesture(
                LongPressGesture(minimumDuration: 1.5)
                    .onEnded { _ in
                        if !appModel.isPanoramaMenuVisible {
                            print("Detected long press gesture, showing menu")
                            showMenuWithGesture()
                        }
                    }
            )
        }

        private func showMenuWithGesture() {
            if !appModel.isPanoramaMenuVisible {
                appModel.showPanoramaMenu()
                if !appModel.windowExists(id: "PanoramaMenu") {
                    openWindow(id: "PanoramaMenu", value: "menu")
                }
            }
        }
    }

What I've Tried

1. Multiple SpatialTapGesture approaches: originally tried using multiple .gesture() modifiers with SpatialTapGesture(count: 1) and SpatialTapGesture(count: 2), but realized they override each other.
2. SpatialEventGesture implementation: switched to SpatialEventGesture to handle multiple event types (.touch, .indirectPinch, .pointer), but pinch gestures still don't trigger the menu.
3. Added debugging: console logs show that the gesture callbacks are never called when performing pinch gestures in the immersive space.
4. Backup LongPressGesture: added a simultaneous long press gesture as a backup, which also doesn't work consistently.

Expected Behavior

When the panorama menu is hidden (after the 5-second auto-hide), users should be able to:

- perform a pinch gesture (indirect pinch) to show the menu;
- tap in space to show the menu;
- use other spatial gestures to show the menu.

Questions

1. Is SpatialEventGesture the correct approach for detecting gestures in a full immersive RealityView?
2. Are there any special considerations for gesture detection when the RealityView contains a large panoramic sphere that might be intercepting gestures?
3. Should I be using a different gesture approach for visionOS immersive spaces?
4. Is there a way to ensure gestures work even when the RealityView content (panoramic sphere) might be blocking them?

Environment

- Xcode 16.1
- visionOS 2.5
- Testing on Vision Pro device
- App uses SwiftUI + RealityKit

Any guidance on the proper way to implement spatial gesture detection in visionOS immersive spaces would be greatly appreciated!

Additional Context

The app manages multiple windows, and the gesture detection should work specifically when in the immersive panorama mode with the menu hidden. Thank you for any help or suggestions!
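One commonly suggested thing to check, offered here as a sketch rather than a confirmed diagnosis: SwiftUI gestures only reach RealityKit content that opts into hit-testing via InputTargetComponent and CollisionComponent, and targeted gestures (.targetedToAnyEntity()) are the usual pattern. Entity names and the radius below are illustrative.

```swift
import SwiftUI
import RealityKit

// Sketch: without InputTargetComponent + CollisionComponent, RealityKit
// content does not participate in SwiftUI hit-testing, so gestures
// aimed at the sphere never fire.
struct GestureTargetExample: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 10))
            sphere.components.set(InputTargetComponent())
            sphere.components.set(
                CollisionComponent(shapes: [.generateSphere(radius: 10)])
            )
            content.add(sphere)
        }
        // Targeted gestures are delivered only when the pinch hits an entity.
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    print("Tap landed on the sphere")
                }
        )
    }
}
```

One caveat worth testing: when the viewer is inside the sphere, hit-testing against an outward-facing collision shape may behave differently, so a separate collision primitive sized for the interior may be needed.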
Replies: 1 · Boosts: 0 · Views: 139 · Jun ’25

Launching a Unity fully immersive game from SwiftUI
I am trying to launch a fully immersive Unity game from a SwiftUI view. The game uses Metal rendering with Compositor Services. I added the Unity Xcode project to the workspace and added the necessary bridge code. When I click the button that calls ufw?.showUnityWindow(), the game does not start, and I get the following in the console:

    AR session failed to start after 5 seconds. Is the app configured to use an immersive space?
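A minimal sketch of one thing to check, assuming the error means the SwiftUI host must declare and open an ImmersiveSpace before the AR/compositor session starts. The scene ID and the empty space content are placeholders, and the actual attachment point for Unity's rendering depends on the bridge (ufw?.showUnityWindow() is from the post).

```swift
import SwiftUI
import RealityKit

@main
struct HostApp: App {
    var body: some Scene {
        WindowGroup {
            LaunchView()
        }

        // Declaring an immersive space gives the AR session a place to run.
        ImmersiveSpace(id: "UnitySpace") {
            // Unity's rendering would attach here; an empty RealityView
            // stands in for this sketch.
            RealityView { _ in }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}

struct LaunchView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Start game") {
            Task {
                // Open the space first, then hand off to the Unity bridge.
                let result = await openImmersiveSpace(id: "UnitySpace")
                print("Open immersive space result:", result)
            }
        }
    }
}
```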
Replies: 2 · Boosts: 0 · Views: 86 · Jun ’25

Issue: Closing Bounded Volume Never Re-Opens
Greetings. I am having an issue with a Unity PolySpatial visionOS app. We have our main Bounded Volume for our app, plus other native UI windows that appear when we interact with objects in the Bounded Volume. If a user closes our main Bounded Volume, sometimes it quits the app and sometimes it doesn't. If we go back to the home screen and reopen the app, our main Bounded Volume doesn't always appear; only the native UI windows we left open are visible. But we can sometimes still hear sounds that are playing in our Bounded Volume. What solutions are there to make sure our Bounded Volume always appears when the app is open?
Replies: 1 · Boosts: 0 · Views: 75 · Jun ’25

CoreBluetooth on visionOS cannot connect 3 or more devices
I'm trying to use the CoreBluetooth API in my custom app on visionOS. I can connect to two devices from my app, but not to three or more: when I attempt to connect a third device through the API, the connection never completes and no result is returned. When two devices are already connected in the Bluetooth settings, I see the same behavior in my custom app. However, I can connect three or more devices from the system Bluetooth settings. Has anyone had a similar problem?
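For reference, a minimal sketch of the connect flow under discussion, assuming a standard CBCentralManager delegate setup; names are illustrative and no service filter is applied.

```swift
import CoreBluetooth

// Sketch of connecting to multiple peripherals. The post reports that
// didConnect never fires for the third device on visionOS.
final class MultiConnectManager: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private var peripherals: [CBPeripheral] = []

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        guard !peripherals.contains(peripheral) else { return }
        peripherals.append(peripheral) // keep a strong reference
        central.connect(peripheral, options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didConnect peripheral: CBPeripheral) {
        print("Connected:", peripheral.identifier)
    }
}
```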
Replies: 3 · Boosts: 0 · Views: 72 · May ’25

White gap between objects in RealityView
I want to display a huge image in a RealityView in 3D space on Vision Pro. Of course, instead of one giant file, I'm using a number of large images. To achieve this, I generate multiple planes exactly beside each other and put one image on each. Although the planes are exactly adjacent, there is still a white gap between them (image below). Does anybody know how to fix this issue?
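For what it's worth, seams like this usually come from fractional plane positions or from texture sampling at tile borders rather than from the meshes themselves. A minimal sketch of exact edge-to-edge placement, with illustrative names and tile size (not the poster's code):

```swift
import RealityKit
import UIKit

// Sketch of the tiling approach described: a grid of unlit planes placed
// exactly one tile-width apart. Any fractional offset in `position`
// shows up as a visible seam between tiles.
func makeTiledWall(rows: [[UIImage]], tileSize: Float = 1.0) throws -> Entity {
    let root = Entity()
    for (rowIndex, row) in rows.enumerated() {
        for (colIndex, image) in row.enumerated() {
            guard let cgImage = image.cgImage else { continue }
            let texture = try TextureResource.generate(
                from: cgImage,
                options: .init(semantic: .color)
            )
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))

            let plane = ModelEntity(
                mesh: .generatePlane(width: tileSize, height: tileSize),
                materials: [material]
            )
            // Exact integer multiples of tileSize: no gaps from placement.
            plane.position = [Float(colIndex) * tileSize,
                              Float(rowIndex) * tileSize,
                              0]
            root.addChild(plane)
        }
    }
    return root
}
```

If placement is already exact, it is worth checking that each tile image has no transparent or white border pixels, since sampling at the edge of a texture can bleed border color into the visible seam.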
Replies: 0 · Boosts: 0 · Views: 98 · May ’25