A Summary of the WWDC25 Group Lab - visionOS

At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for visionOS.

I saw that there is a new way to add SwiftUI View attachments in my RealityView, what advantages does this have over the old way?

Attachments can now be added directly to your entities with ViewAttachmentComponent. This removes the need to declare your attachments upfront in your RealityView initializer and then add those attachments as child entities. The new approach provides greater flexibility. Canyon Crosser and Petite Asteroids both use the new approach.
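
As a rough sketch of the new approach (assuming visionOS 26; the rootView: initializer label is an assumption), a SwiftUI view can be attached to an entity like this:

```swift
import SwiftUI
import RealityKit

// Sketch: attach SwiftUI content directly to an entity with ViewAttachmentComponent.
// Assumes visionOS 26; the rootView: label is an assumption about the initializer.
func makeLabelEntity() -> Entity {
    let labelEntity = Entity()
    labelEntity.components.set(
        ViewAttachmentComponent(rootView: Text("Hello, visionOS!"))
    )
    // Add labelEntity to your RealityView content like any other entity.
    return labelEntity
}
```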

ManipulationComponent looks really cool! Right now my app has a series of complicated custom gestures. What gestures does it handle for me exactly, and are there any situations where I should prefer my own custom gestures?

ManipulationComponent provides natural interaction with virtual objects. It seamlessly handles translation and rotation. You can easily add manipulation to a SwiftUI view like Model3D with the manipulable view modifier.

The new Object Manipulation API is great for most apps, and is a breeze to implement, but sometimes you might want a more custom feel, and that’s ok! Custom gestures are still fully supported for that scenario.
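
As a minimal sketch of the view-modifier path (assuming the modifier is spelled .manipulable(), and that a model named "ToyRocket" exists in your bundle):

```swift
import SwiftUI
import RealityKit

// Sketch: system-provided drag and rotate interactions on a Model3D.
// "ToyRocket" is a hypothetical model asset; the .manipulable() spelling is assumed.
struct RocketView: View {
    var body: some View {
        Model3D(named: "ToyRocket")
            .manipulable()
    }
}
```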

I saw that there is a new API to also access the right main camera. What can I do with this?

Correct, in visionOS 26, you can access the left and right main cameras. You can even access them simultaneously as a stereo pair. Camera access still requires a managed entitlement and an enterprise license; see Accessing the main camera for more details about those requirements.

More computer vision and machine learning use cases are unlocked with access to both cameras; we are excited to see what you will do!
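
A rough sketch of stereo main-camera access with ARKit's enterprise camera API (the format lookup and sample accessors below are our best understanding of that API and should be treated as assumptions):

```swift
import ARKit

// Sketch: read stereo frames from the left and right main cameras.
// Requires the enterprise entitlement and license described above.
let session = ARKitSession()
let cameraProvider = CameraFrameProvider()

func startStereoCapture() async throws {
    try await session.run([cameraProvider])

    // Pick a format that exposes both camera positions (assumed API shape).
    guard let format = CameraVideoFormat
            .supportedVideoFormats(for: .main, cameraPositions: [.left, .right])
            .first,
          let updates = cameraProvider.cameraFrameUpdates(for: format) else { return }

    for await frame in updates {
        let left = frame.sample(for: .left)
        let right = frame.sample(for: .right)
        // Feed the pixel buffers into your computer vision pipeline.
        _ = (left, right)
    }
}
```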

What do I need to do to add spatial accessory input for my app?

First, use the GameController framework to establish a connection with the spatial accessory, and then listen for events from the controller. Then, you can use either RealityKit, ARKit, or a combination of both to track the accessory, anchor virtual content to it, and fine tune the accessory interaction with the content in your app.

For more details, check out Discovering and tracking spatial game controllers and styli.
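
A minimal sketch of the connection step with the GameController framework:

```swift
import GameController

// Sketch: listen for a spatial accessory connecting, then register input handlers.
func observeSpatialAccessories() {
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect,
        object: nil,
        queue: .main
    ) { notification in
        guard let controller = notification.object as? GCController else { return }
        // Inspect controller.physicalInputProfile and register handlers here,
        // then use RealityKit and/or ARKit to track and anchor content to the accessory.
        print("Connected: \(controller.vendorName ?? "Unknown accessory")")
    }
}
```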

By far, the most difficulty with implementing visionOS apps is SwiftUI window management…placing, opening, closing, etc. Are there any improvements to window management in visionOS 26?

Yes! We recommend watching Set the scene with SwiftUI in visionOS.

You can use the defaultLaunchBehavior to choose whether a particular window is presented (or suppressed) at launch. You can also prevent a window like a secondary toolbar from launching as the initial window using .restorationBehavior(.disabled). Adopting best practices for persistent UI provides a great overview of SwiftUI window management on visionOS.
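
A minimal sketch of those modifiers on a secondary window (ContentView and ToolbarView are placeholder views, and the .suppressed value is our assumption about the defaultLaunchBehavior cases):

```swift
import SwiftUI

struct ContentView: View { var body: some View { Text("Main window") } }
struct ToolbarView: View { var body: some View { Text("Tools") } }

@main
struct MyVisionApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            ContentView()
        }

        // A secondary toolbar window that shouldn't open at launch
        // or be restored as the initial window.
        WindowGroup(id: "toolbar") {
            ToolbarView()
        }
        .defaultLaunchBehavior(.suppressed) // assumed case name
        .restorationBehavior(.disabled)
    }
}
```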

As for placing windows, there is still no API for an app to specify the placement of its windows other than relative placement. If that is a feature you are interested in, please file an enhancement request for it using Feedback Assistant!

How do I get access to the Enterprise APIs?

First, request the entitlement and license through your Apple Developer or enterprise account. Once these have been granted, include the license and entitlement in your project. Then you can build, test, and distribute as an in-house app.

Building spatial experiences for business apps with enterprise APIs for visionOS provides more detail on this!

Is there a way to close an app in visionOS without going to Force Quit on the Apple Vision Pro?

Great visionOS apps work the same way whether they’ve just been launched or brought forward. People can close windows that they aren’t using, and the system will keep the app running if it’s best for performance.

I’m creating an app that has a lot of 3D models. What’s the max file size for a visionOS app? Also, what’s a good target for max CPU and memory usage? Just want to make sure my app doesn’t exceed any maximums. Thanks! 😊

4 GB is the maximum app size. Check out Background Assets if you’d like to download additional resources for your app after the initial installation.

There is a 5 GB memory limit, but we recommend that you keep a good amount of headroom. The less memory, CPU, and GPU you use, the better. See Reducing CPU Utilization in Your RealityKit App and Reducing GPU Utilization in Your RealityKit App.

You can also use RealityKit Trace to identify performance problems in your app; watch Meet RealityKit Trace to learn more about how to use it.

Will Icon Composer work for creating visionOS icons?

We recommend that you use Xcode’s asset catalog to create visionOS icons. You can use the Parallax Previewer to preview your icon. You should also file an enhancement request for visionOS icon support in Icon Composer using Feedback Assistant.

Hello, Thank you so much for your incredible work and for this session! My question is about the new Shared Experiences feature on Apple Vision Pro. Do we still need to integrate SharePlay in order to support this functionality? Even while being in the same room? Thank you so much, Sayan 😊 🤩

SharePlay via the GroupActivities framework is the best way to share visionOS experiences with nearby people. By adopting SharePlay your app will support sharing with both nearby and remote people who appear as spatial Personas.

Non-immersive, windowed apps can be shared with nearby people via window mirroring without adopting SharePlay, and enterprise apps can use ARKit APIs to build locally shared experiences with their own infrastructure.

Watch Share visionOS experiences with nearby people to learn more about nearby sharing.

How do you make the live widgets in Xcode that look like they go into the wall?

Make a widget extension, add it to your visionOS target, and specify the recessed mounting style as a supported mounting style.
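
A rough sketch of a widget that opts into the recessed mounting style (the supportedMountingStyles modifier and .recessed case names are assumptions about the visionOS 26 WidgetKit API):

```swift
import WidgetKit
import SwiftUI

struct ClockEntry: TimelineEntry { let date: Date }

struct ClockProvider: TimelineProvider {
    func placeholder(in context: Context) -> ClockEntry { ClockEntry(date: .now) }
    func getSnapshot(in context: Context, completion: @escaping (ClockEntry) -> Void) {
        completion(ClockEntry(date: .now))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<ClockEntry>) -> Void) {
        completion(Timeline(entries: [ClockEntry(date: .now)], policy: .after(.now + 60)))
    }
}

@main
struct WallClockWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "WallClock", provider: ClockProvider()) { entry in
            Text(entry.date, style: .time)
        }
        .supportedMountingStyles([.recessed]) // assumed modifier and case names
    }
}
```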

Design widgets for visionOS, What’s new in widgets, and Updating your widgets for visionOS are all great resources to learn more here!

Are there best practices for taking an existing SwiftUI app and making a two-dimensional visionOS app?

Compatible iOS apps can run on visionOS, see Making your existing app compatible with visionOS.

If you want to go further than compatibility, you can add visionOS as a platform to your Xcode project’s target, and develop with the visionOS Simulator. Principles of spatial design, Design for spatial user interfaces, Design hover interactions for visionOS, and Bringing your existing apps to visionOS will provide you with lots of best practices to apply!

I’m aware that baked lighting is often recommended, but is there a supported way to use dynamic lights and shadows on Apple Vision Pro—especially when working natively with RealityKit or Reality Composer Pro? If so, what are the limitations or best practices?

RealityKit offers dynamic lights with shadows, limited to 8 dynamic lights at a time in a scene. You can use ShaderGraph to combine baked lighting with RealityKit’s dynamic lights; see Petite Asteroids as an example that combines baked lighting with shader effects to approximate real-time shadows.
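
A minimal sketch of one dynamic light with shadows in RealityKit:

```swift
import RealityKit

// Sketch: a spot light that casts dynamic shadows onto entities below it.
func makeSpotLight() -> Entity {
    let lightEntity = Entity()
    var spotLight = SpotLightComponent()
    spotLight.intensity = 5_000
    lightEntity.components.set(spotLight)
    lightEntity.components.set(SpotLightComponent.Shadow())
    lightEntity.position = [0, 1.5, 1]
    lightEntity.look(at: .zero, from: lightEntity.position, relativeTo: nil)
    // Add lightEntity to your RealityView content or Reality Composer Pro scene.
    return lightEntity
}
```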

ImageBasedLightComponent can also approximate environment reflections and ambient light for your scene.
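
A minimal sketch of image-based lighting, assuming an environment resource named "Sunlight" exists in your bundle:

```swift
import RealityKit

// Sketch: light a model with an image-based light.
func makeLitSphere() async throws -> [Entity] {
    let environment = try await EnvironmentResource(named: "Sunlight") // hypothetical resource

    let iblEntity = Entity()
    iblEntity.components.set(ImageBasedLightComponent(source: .single(environment)))

    // Entities that should receive the lighting reference the light entity.
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [SimpleMaterial()])
    sphere.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))

    return [iblEntity, sphere]
}
```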

Hey, Thank you so much for your work and this lab! I have a quick question regarding a visual effect. Can you please explain how to make the 'projector beam' around the photo when viewing spatial photos on Vision Pro?

The effect you are describing is what we would call a “feathered” effect. You can use the ImagePresentationComponent with the spatial3D and spatial3DImmersive viewing modes to see this effect.
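
As a heavily hedged sketch (only the component name and viewing modes come from the answer above; the initializer, generate step, and desiredViewingMode property are assumptions about the visionOS 26 API):

```swift
import RealityKit

// Sketch: present a spatial photo with the feathered "projector beam" treatment.
// photoURL points at a spatial photo in your app; the API details are assumptions.
func makeSpatialPhotoEntity(photoURL: URL) async throws -> Entity {
    let spatialImage = try await ImagePresentationComponent.Spatial3DImage(contentsOf: photoURL)
    try await spatialImage.generate()

    var presentation = ImagePresentationComponent(spatial3DImage: spatialImage)
    presentation.desiredViewingMode = .spatial3DImmersive

    let entity = Entity()
    entity.components.set(presentation)
    return entity
}
```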

I’m looking to bring a simple, flat iOS app into visionOS. What are the most important shifts in thinking or design I should keep in mind?

You should consider how your app will work with indirect gestures and hover effects. You can also think about bringing more depth to your app with SwiftUI or RealityKit. See Principles of spatial design, Design for spatial user interfaces, and Design hover interactions for visionOS, which raise lots of important design considerations!

SharePlay with nearby users: is the scene by default at the same place for all nearby users? When someone recenters the scene with the Digital Crown, does it reposition the scene around them for every nearby user? And does it stay the same for remote users?

Yes, the scene associated with your activity will always be in the same place for all participants in the activity. Participants can point and gesture at the scene and have a shared spatial context. When someone recenters the scene it repositions for all participants. The recentering behavior is different depending on if the session includes remote participants or not. In all cases, recentering will place the scene in a group-centric position that will be best for the group as a whole, and different from where the system would position the scene outside of SharePlay. This behavior is covered in Share visionOS experiences with nearby people.

What does ARKit gain you compared to just sticking with RealityKit for mixed immersive apps? It seems like the gap is narrower now than it was in visionOS 1.

ARKit provides you with data streams for your desired data types. It is necessary for specific types of data that would otherwise be inaccessible with only RealityKit, like scene reconstruction, hand tracking, or object tracking via Reference Objects. Note that ARKit access requires an immersive space to be open.

RealityKit can incorporate hand input using Gestures, and can be used to visually track an anchor using an entity. ARKitAnchorComponent allows you to get the ARKit anchor from a RealityKit AnchorEntity.

Apps that use Compositor Services and Metal for rendering, or SwiftUI-only apps that don’t use RealityKit, can use ARKit for access to various data streams.
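
A minimal sketch of an ARKit data stream (hand tracking) that isn't available through RealityKit alone; this must run while an immersive space is open:

```swift
import ARKit

// Sketch: run hand tracking and consume anchor updates.
let arSession = ARKitSession()
let handTracking = HandTrackingProvider()

func startHandTracking() async {
    do {
        try await arSession.run([handTracking])
        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            // Use anchor.chirality, anchor.handSkeleton, and
            // anchor.originFromAnchorTransform to drive your content.
            _ = anchor
        }
    } catch {
        print("Failed to run ARKit session: \(error)")
    }
}
```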

When creating an immersive game with nearby players, do I use SharePlay to share game state, or do I need to set up another communication channel for that?

Yes, you should use SharePlay to share game state. You can use the GroupSessionMessenger API to send state update messages between participants. Building a guessing game for visionOS does exactly this!
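
A minimal sketch of sending and receiving game state with GroupSessionMessenger (MoveMessage is a hypothetical message type):

```swift
import GroupActivities

// Sketch: a Codable message type synced over SharePlay.
struct MoveMessage: Codable {
    let entityName: String
    let position: SIMD3<Float>
}

func configureMessenger<Activity: GroupActivity>(for groupSession: GroupSession<Activity>) {
    let messenger = GroupSessionMessenger(session: groupSession)

    // Receive state updates from other participants.
    Task {
        for await (message, _) in messenger.messages(of: MoveMessage.self) {
            // Apply the move to your local scene.
            _ = message
        }
    }

    // Send a state update to everyone in the session.
    Task {
        try? await messenger.send(MoveMessage(entityName: "ball", position: [0, 1, -1]))
    }
}
```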

Can normal SwiftUI views also make use of the features that widgets have, like sticking in position and being attached to the wall?

Any window can snap to surfaces! The backs of windows will snap to vertical surfaces, and the bottom of volumetric windows will snap to horizontal surfaces. You can use ARKit or RealityKit in an immersive space to build your own snapping behaviors.

Will we be able to file for entitlements to get access to Core ML on visionOS (I’m trying to do handwriting recognition, which works on iOS)? I’ve been told it’s a “no?” What other ways can Apple Intelligence be integrated into visionOS (voice/speech)?

You can use Core ML on visionOS without an entitlement (the model will execute on the CPU/GPU). Apple Neural Engine access requires a license and entitlement. The Foundation Models framework provides access to an on-device LLM.
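
A minimal sketch of running a Core ML model on CPU and GPU only (Handwriting is a hypothetical model class that Xcode generates from a .mlmodel file):

```swift
import CoreML

// Sketch: keep inference on CPU/GPU, since Neural Engine access on visionOS
// requires an enterprise license and entitlement.
func loadHandwritingModel() throws -> Handwriting {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .cpuAndGPU
    return try Handwriting(configuration: configuration) // hypothetical generated class
}
```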

For nearby SharePlay, is there a maximum number of allowed local participants?

visionOS allows you to share experiences with up to four other spatial participants. Those participants can be a mix of remote participants appearing as spatial Personas and participants who are nearby with you.

What are the performance impacts of constructing entities and scenes with Reality Composer Pro vs. in code?

Entities and scenes built with RCP are expected to have the same performance characteristics as those instantiated in code.

The contents of an RCP package are imported into the app rather than instantiated, and the import pipeline is very efficient. Sometimes an RCP project will specify additional processing on entities to optimize performance, e.g. texture compression.

To mix RCP entities with instantiated ones, the instantiated entity needs components that implement the Codable protocol so that RealityKit can serialize them.

See Petite Asteroids: Building a volumetric visionOS game for an example of how to load assets from a Reality Composer Pro package.
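
A minimal sketch of loading a Reality Composer Pro scene ("Scene" is a hypothetical entity name in the generated RealityKitContent package):

```swift
import SwiftUI
import RealityKit
import RealityKitContent // package generated by the visionOS app template

struct AsteroidFieldView: View {
    var body: some View {
        RealityView { content in
            // Load the authored scene and add it like any other entity.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}
```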

What are best practices for having other users in your SharePlay session interact with objects, not just see the host move them around?

The TabletopKit framework supports this out-of-the-box and is a great option for shared interactions.

Apps can also use the GroupSessionMessenger in coordination with something like the new manipulable APIs to build their own solution for synchronizing objects.

Is there a technique or API for the progressive style of an immersive space to be a different shape or have a masking effect?

ProgressiveImmersionAspectRatio options such as portrait and landscape can be used to change the shape, and are available in SwiftUI on visionOS.

For more general masking, check out drawMaskOnStencilAttachment(commandEncoder:value:).

Can you use visionOS to identify and annotate features outdoors, like a garden?

You can use visionOS object tracking to recognize real-world objects and features; object tracking is expected to support objects as large as a car.

See Implementing object tracking in your visionOS app for details.
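
A minimal sketch of object tracking from a reference object ("GardenPlanter.referenceobject" is a hypothetical asset in the app bundle):

```swift
import ARKit

// Sketch: track a real-world object and anchor annotations to it.
let trackingSession = ARKitSession()

func startObjectTracking() async throws {
    guard let url = Bundle.main.url(forResource: "GardenPlanter",
                                    withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])

    try await trackingSession.run([provider])
    for await update in provider.anchorUpdates {
        // Place annotation entities at update.anchor.originFromAnchorTransform.
        _ = update
    }
}
```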

Is there a tool to create 3D graphs, which were introduced in recent updates?

We recommend the Swift Charts framework and the related WWDC session Bring Swift Charts to the third dimension.

What are your recommendations, if any, on how to make visionOS apps compatible with iPad, iOS, and Mac?

Discover RealityKit APIs for iOS, macOS, and visionOS focuses on cross-platform RealityKit API you can use to build responsive and compatible apps across multiple platforms.

Are there going to be any new enterprise API entitlements shipped with visionOS 26?

- App-Protected Content
- Follow Mode for Windows
- Camera Region access
- Main camera access (access to the right and stereoscopic cameras unlocked this year)
- SharedCoordinateSpaceProvider

See Explore enhancements to your spatial business app.

Is there an API in SwiftUI that will continually face the device, or is UIKit Billboard the only way?

ViewAttachmentComponent and BillboardComponent can achieve this in RealityKit with an attached SwiftUI element. See Petite Asteroids: Building a volumetric visionOS game for an example in the dialog bubbles.
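
A minimal sketch of BillboardComponent on an entity:

```swift
import RealityKit

// Sketch: keep an entity facing the viewer; pair with a view attachment
// (for example the new ViewAttachmentComponent) for billboarded SwiftUI content.
func makeBillboardedLabel() -> Entity {
    let label = Entity()
    label.components.set(BillboardComponent())
    return label
}
```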

Can we use Personas outside SharePlay? WebXR, for example?

Personas are available in FaceTime and SharePlay by design.
