I am trying to display a 3D model in an iOS app using RealityView. The same 3D model displays successfully in my visionOS app, and everything works perfectly, but only when I set my project's minimum deployment target to iOS 18.0.
However, my app's minimum deployment target is iOS 15.0. When I use the RealityKitContent package to load the 3D model, compilation fails with the following error:
Compiling for iOS 15.0, but module 'RealityKitContent' has a minimum deployment target of iOS 18.0: /Users/Library/Developer/Xcode/DerivedData/RealityViewForiOS-cbfkgimsqngtuegqwvezusvscllf/Index.noindex/Build/Products/Debug-iphonesimulator/RealityKitContent.swiftmodule/arm64-apple-ios-simulator.swiftmodule
I have made the RealityKitContent package optional and tried guarding the import with the following condition:
#if canImport(RealityKitContent)
import RealityKitContent
#endif
Despite this, it still fails to compile and produces the same error. I have not found a workaround for using the RealityKitContent package with app targets lower than iOS 18.0.
Here is my package definition:
let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v1),
        .macOS(.v15),
        .iOS(.v18)
    ],
    products: [
        .library(
            name: "RealityKitContent",
            targets: ["RealityKitContent"]),
    ],
    dependencies: [],
    targets: [
        .target(
            name: "RealityKitContent",
            dependencies: []),
    ]
)
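From what I understand, #if canImport(RealityKitContent) only checks whether the module can be found in the search paths, not whether its deployment target matches, which would explain why the guard does not exclude the import. One workaround I am experimenting with is lowering the package's own platform declaration, on the assumption that nothing inside the package sources actually requires iOS 18 API (the generated sources mostly just expose realityKitContentBundle). This is only a sketch, not something I have confirmed is supported:

// Hypothetical variant of the manifest: declare iOS 15 so the module itself
// compiles for the app's minimum target. Assumes the package sources build on iOS 15.
let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v1),
        .macOS(.v15),
        .iOS(.v15)
    ],
    products: [
        .library(
            name: "RealityKitContent",
            targets: ["RealityKitContent"]),
    ],
    dependencies: [],
    targets: [
        .target(
            name: "RealityKitContent",
            dependencies: []),
    ]
)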
Here is the code I am using to load the 3D model in a RealityView via the RealityKitContent package:
import SwiftUI
import RealityKit
#if canImport(RealityKitContent)
import RealityKitContent
#endif

struct ContentView: View {
    var body: some View {
        VStack {
            if #available(iOS 18.0, *) {
                RealityView { content in
                    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                        content.add(scene)
                    }
                } update: { content in
                    if let scene = content.entities.first {
                        let uniformScale: Float = 3.0
                        scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                    }
                }
            } else {
                // Fallback for earlier versions
            }
        }
    }
}

#Preview {
    ContentView()
}
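For the "Fallback for earlier versions" branch, the closest equivalent I have sketched so far is RealityKit's ARView, which is available back to iOS 13. This is only a sketch and assumes I also copy the model into the app target as a plain Scene.usdz, since realityKitContentBundle would not be usable below iOS 18:

import SwiftUI
import RealityKit

// Fallback for iOS 15-17: ARView wrapped for SwiftUI.
struct LegacyModelView: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        // .nonAR shows the model without starting an ARKit camera session.
        let arView = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: false)
        if let scene = try? Entity.load(named: "Scene") { // assumes Scene.usdz in the app bundle
            scene.transform.scale = [3.0, 3.0, 3.0]
            let anchor = AnchorEntity(world: .zero)
            anchor.addChild(scene)
            arView.scene.addAnchor(anchor)
        }
        // A camera is needed in .nonAR mode to frame the content.
        let camera = PerspectiveCamera()
        camera.position = [0, 0, 2]
        let cameraAnchor = AnchorEntity(world: .zero)
        cameraAnchor.addChild(camera)
        arView.scene.addAnchor(cameraAnchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}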
Any help or guidance on how to use the RealityKitContent package for app targets lower than iOS 18.0 would be greatly appreciated.
In a demo shown at WWDC24, users can control a robot and make it walk by pinching and sliding. However, I haven't found any documentation or videos related to this feature. If you know of any, please let me know. Thank you!
Hi, I love the VideoMaterial API that gives so much power to play video on any mesh. But I am trying to play a side-by-side 3D video using VideoMaterial:
RealityView { content in
    let mesh = MeshResource.generatePlane(width: 300.0, height: 300.0, cornerRadius: 0) // generate mesh
    let vidMaterial = VideoMaterial(avPlayer: AVPlayer(url: URL(string: "https://someurl/test/master.m3u8")!)) // VideoMaterial
    vidMaterial.controller.preferredViewingMode = .stereo // <-- no idea why it doesn't work for SBS video in simulator
    vidMaterial.avPlayer?.play()
    let planeEntity = Entity() // new entity
    planeEntity.components.set(ModelComponent(mesh: mesh, materials: [vidMaterial])) // set a new ModelComponent on the entity
    content.add(planeEntity)
}
This code works well for plain 2D video playback, but how do I display a side-by-side or top-bottom 3D video?
I found GeometrySwitchCameraIndex in a custom ShaderGraphMaterial, but if I use an image texture as the input node, how do I pass the video frames into my custom shader as a texture to achieve the 3D effect? Or maybe there is an even better way to deal with this?
There is also the additional .preferredViewingMode API on the VideoMaterial's controller, which can be set to .stereo, but it doesn't give any stereo effect. Perhaps it only applies to MV-HEVC media playback?
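In case it is useful context, the direction I am exploring for getting video frames into a custom shader is a TextureResource.DrawableQueue fed from an AVPlayerItemVideoOutput. This is only a sketch of my current attempt: the material path "/Root/SBSMaterial" and the "VideoTexture" parameter name are placeholders for whatever the Reality Composer Pro graph exposes, and the blit assumes the decoded frames match the drawable's size and pixel format:

import AVFoundation
import CoreVideo
import Metal
import QuartzCore
import RealityKit

// Feed AVPlayer frames into a TextureResource via DrawableQueue so a custom
// ShaderGraphMaterial can sample the video.
final class VideoTextureFeeder {
    let drawableQueue: TextureResource.DrawableQueue
    let textureResource: TextureResource
    private let output: AVPlayerItemVideoOutput
    private let device = MTLCreateSystemDefaultDevice()!
    private let commandQueue: MTLCommandQueue
    private var textureCache: CVMetalTextureCache?

    init(playerItem: AVPlayerItem, width: Int, height: Int, placeholder: TextureResource) throws {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm, width: width, height: height,
            usage: [.renderTarget, .shaderRead], mipmapsMode: .none)
        drawableQueue = try TextureResource.DrawableQueue(descriptor)
        // Any small texture works as a placeholder; the queue replaces its contents.
        textureResource = placeholder
        textureResource.replace(withDrawables: drawableQueue)
        output = AVPlayerItemVideoOutput(pixelBufferAttributes:
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
        playerItem.add(output)
        commandQueue = device.makeCommandQueue()!
        CVMetalTextureCacheCreate(nil, nil, device, nil, &textureCache)
    }

    // Call once per display frame (e.g. from a CADisplayLink or a scene-update subscription).
    func copyNextFrame() {
        let itemTime = output.itemTime(forHostTime: CACurrentMediaTime())
        guard output.hasNewPixelBuffer(forItemTime: itemTime),
              let pixelBuffer = output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil),
              let cache = textureCache,
              let drawable = try? drawableQueue.nextDrawable() else { return }
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(
            nil, cache, pixelBuffer, nil, .bgra8Unorm,
            CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
            0, &cvTexture)
        guard let cvTexture, let source = CVMetalTextureGetTexture(cvTexture),
              let commandBuffer = commandQueue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: source, to: drawable.texture) // whole-texture copy; sizes must match
        blit.endEncoding()
        commandBuffer.commit()
        drawable.present()
    }
}

Binding the texture to the custom material would then look roughly like this (again, placeholder names; playerItem and placeholderTexture are assumed to exist already):

let feeder = try VideoTextureFeeder(playerItem: playerItem,
                                    width: 1920, height: 1080,
                                    placeholder: placeholderTexture)
var material = try await ShaderGraphMaterial(named: "/Root/SBSMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
try material.setParameter(name: "VideoTexture",
                          value: .textureResource(feeder.textureResource))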
Is it possible to play a stereoscopic video in MV-HEVC format using a player embedded in an HTML page?
In Mixed Reality mode there are strange issues with indirect pinches on objects.
If a user uses an indirect pinch to select an object and then walks around, or moves and re-orients their body while maintaining the pinch, the object moves as if some scalar is being applied to it, and it behaves in ways that are extremely counter-intuitive compared to other MR devices.
If a user indirect-pinches an object and then walks forward, the object flies away from the user, faster than they are walking. If a user indirect-pinches an object and then walks backward, the object flies toward and eventually past the user, faster than they are walking. If a user indirect-pinches an object and then turns around, the object rotates around some unknown position with some added scalar, resulting in very strange behavior.
Here are some examples of the issue in action. The first video uses Unity's PolySpatial SDK. The second video uses an entirely native stack of SwiftUI and RealityKit with NO Unity at all.
For some reason I am not allowed to link videos here from Drive or Gyazo, so I am including them as plain text for now. If someone could tell me how to upload video examples of what I am describing directly to these forums, I would appreciate it.
First Video Showing Issue in Unity with PolySpatial SDK:
https://i.gyazo.com/95788cf9d4587c167b544db031fbf412.mp4
Second Video Showing Issue in native only stack with RealityKit and Swift UI:
https://drive.google.com/file/d/1mgt8TXJiopbm6qdJw2rFG0geam0irnMn/view?usp=sharing
Unity forum bug discussion which, after investigation, confirmed this issue is on the native platform:
https://discussions.unity.com/t/objects-do-not-behave-properly-when-manipulated-in-an-mr-space/1482439
For a Mixed Reality environment, where a user may want to move around their space while using indirect pinches to manipulate and "carry" objects with them, this is a big issue.
Thank you
I am a student at Utah Valley University doing a UX research project on spatial web browsing in Safari. I am trying to determine whether spatial video and photos would be supported on a Safari web page while using the AVP.
I am not a developer, so my knowledge on that front is limited, but I am hoping to get any insight into whether that feature could be implemented in a web-based experience. If so, what formats would need to be used? Can the MV-HEVC format be embedded directly, or is there another format that needs to be explored?
Any insight is appreciated!
Hello,
I have a simple SwiftUI view that shows a bottom bar, and
I noticed that in SwiftUI previews the 2D window is squared off, while in the simulator it has rounded corners. This affects the bottom bar: as you can see in the simulator, the text is cut off. I am using Xcode 16 beta and visionOS 2 beta.
Why do the two windows look different? And I am surprised the text is getting cut off in the rounded window.
SwiftUI Preview:
Vision Pro Simulator
I'm having an issue with Group Activities and Spatial Templates in a fully immersive space.
Basically, my app switches between various immersive spaces and sets a SpatialTemplate based on the RealityView it enters. However, whenever a SpatialTemplate is set, it randomly makes the immersive space disappear without dismissing it. The Digital Crown has to be pressed to properly dismiss the immersive space, even though it's not visible.
I can get around this by toggling systemCoordinator.configuration.supportsGroupImmersiveSpace to false before entering and then waiting a couple of seconds before setting it to true. However, this doesn't seem like a great solution.
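For reference, the toggle I described looks roughly like this (a sketch; MyActivity stands in for my GroupActivity type, "MyScene" for my immersive space id, and the two-second delay is arbitrary):

import SwiftUI
import GroupActivities

struct ImmersiveEntryView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    let session: GroupSession<MyActivity> // MyActivity is a placeholder activity type

    var body: some View {
        Button("Enter Immersive Space") {
            Task {
                guard let coordinator = await session.systemCoordinator else { return }
                var config = SystemCoordinator.Configuration()
                config.supportsGroupImmersiveSpace = false
                coordinator.configuration = config
                await openImmersiveSpace(id: "MyScene")
                try? await Task.sleep(for: .seconds(2)) // arbitrary settle delay
                config.supportsGroupImmersiveSpace = true
                coordinator.configuration = config
            }
        }
    }
}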
Another issue is that sometimes when entering a fully immersive space with an active Group Activity session, it flips the rotation of the RealityKit content. The immersiveSpaceDisplacement values are way off, so setting any offsets based on that is not an option. It seems like when the system is attempting to place the participants in the "appropriate" location, it doesn't understand the fully immersive environment at all. Granted, my RealityKit content is pretty complex, but I don't think it should flip the scene's y-axis upside down.
I was wondering if anyone else is experiencing these issues and has any workarounds?
Topic:
Spatial Computing
SubTopic:
General
Hello.
I am trying to calculate rays from the NDC coordinates of the screen and the inverses of the projection and view matrices provided by the visionOS API. It works perfectly in the simulator, but on device the projected rays do not match the (correct) projection of the raster scene rendered with the same projection and view matrices.
Are there some differences between the device and simulator projection matrices that might cause this issue?
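For reference, my unprojection is essentially the following (simplified). One thing I am unsure about is the depth convention: with reverse-Z the far plane sits at NDC z = 0 rather than 1, and a mismatch there, or an asymmetric per-eye frustum on device, seems like the kind of thing that could diverge between simulator and device:

import simd

// Build a world-space ray from an NDC point using the inverse view-projection.
// ndcZ selects a point on the far plane: 1 for conventional depth, 0 for reverse-Z.
func worldRay(ndc: SIMD2<Float>,
              ndcZ: Float,
              inverseViewProjection: simd_float4x4,
              cameraPosition: SIMD3<Float>) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    let clip = SIMD4<Float>(ndc.x, ndc.y, ndcZ, 1)
    let h = inverseViewProjection * clip
    let world = SIMD3<Float>(h.x, h.y, h.z) / h.w
    return (cameraPosition, simd_normalize(world - cameraPosition))
}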
For visionOS, is there an API or function I can use to determine whether the user has a Persona set up for video calls? If not, is there any way I can attempt to determine that "something" is available?
Hi all! I'm new to visionOS development, so please excuse my inexperience.
I'm trying to run an Xcode project generated by Unity PolySpatial (Unity 6 preview, PolySpatial 2.0.0.pre-9) on my Apple Vision Pro, which is running visionOS 2.0 (22N5286g). However, the device doesn't appear in Xcode's list of run destinations unless I lower the visionOS version in "Minimum Deployments" to below 2.0. Lowering it to anything below 2.0 makes the device appear as a run destination, but the build fails with errors that I assume are due to targeting a lower OS level.
Note that I have been able to successfully build and deploy to my device using a Unity-generated Xcode project that only used visionOS 1 features (built off of Unity 2022.3.35f1) -- the issue appears to be specific to when I'm trying to use 2.0 features.
I'm sure I'm just missing something silly here -- why wouldn't the device appear as a valid run destination for visionOS 2.0 when the device is decidedly running 2.0?
When I use the resource-creation call to read the audio file in the Immersive.usda file, it doesn't work; the compiler reports that it cannot find the file in the project. I also tried to get the URL of the Immersive.usda file in a newly created default project, but I couldn't retrieve it either. Why is that? Maybe I'm missing some configuration steps?
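For anyone comparing notes, the pattern I am attempting is roughly the following (a sketch; the "/Root/AudioFile_wav" path and the "SpatialAudio" entity name are placeholders for whatever is authored in my Immersive.usda):

if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    // Load the audio resource authored inside the RCP scene file.
    if let resource = try? await AudioFileResource(named: "/Root/AudioFile_wav",
                                                   from: "Immersive.usda",
                                                   in: realityKitContentBundle),
       let emitter = scene.findEntity(named: "SpatialAudio") {
        emitter.playAudio(resource)
    }
}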
We are developing a visionOS app with in-app purchases. We tried the in-app purchase example code that works on iPhone, but it doesn't seem to work here; Xcode tells me:
purchase(options:) is unavailable in visionOS: use @Environment(\.purchase) to get a PurchaseAction value to call. If your app uses UIKit, use purchase(confirmingIn:options:).
Does anybody know how to solve this? We have already searched tutorials and the forum with no result. Thank you very much!
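For what it's worth, the pattern the error message points at would look something like this (a sketch of my understanding, using StoreKit 2 and SwiftUI; the product is assumed to be already loaded):

import SwiftUI
import StoreKit

struct BuyButton: View {
    @Environment(\.purchase) private var purchase // PurchaseAction from the environment
    let product: Product

    var body: some View {
        Button("Buy") {
            Task {
                // PurchaseAction is called directly with the product.
                let result = try? await purchase(product)
                if case .success(let verification)? = result {
                    _ = verification // verify the transaction and unlock content here
                }
            }
        }
    }
}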
I used the following gesture inside a RealityView:
DragGesture()
    .targetedToAnyEntity()
    .onChanged { value in
        print("DragGesture")
        self.dragOffset = value.translation
        self.startTimer()
    }
    .onEnded { _ in
        self.dragOffset = .zero
        self.direction = "None"
        self.stopTimer()
    }
However, due to the special nature of RealityView, the gesture is not detected as I expect, so I think some conversion modifiers need to be applied to value.translation, but I don't know which ones. Can you suggest some? Thank you.
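In case it helps, what I have pieced together so far is that the targeted value carries conversion helpers, along these lines (a sketch; it assumes the entity has an InputTargetComponent and a CollisionComponent so it can receive the gesture at all):

// Convert the gesture's 3D location into the entity's parent space.
DragGesture()
    .targetedToAnyEntity()
    .onChanged { value in
        guard let parent = value.entity.parent else { return }
        value.entity.position = value.convert(value.location3D, from: .local, to: parent)
    }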
Hello, I am running into a bug when I try to use a TextField in my SwiftUI project.
As soon as I click on the TextField to begin entering characters, this warning appears twice:
Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.
followed by this warning:
Unable to simultaneously satisfy constraints.
Probably at least one of the constraints in the following list is one you don't want.
Try this:
(1) look at each constraint and try to figure out which you don't expect;
(2) find the code that added the unwanted constraint or constraints and fix it.
(
"<NSLayoutConstraint:0x600002241720 'accessoryView.bottom' _TtGC7SwiftUIP10$1c7cfcbc018InputAccessoryHostVS_P10$1c7cfcc5417InputAccessoryBar_:0x102973200.bottom == _UIRemoteKeyboardPlaceholderView:0x1038ef360.top + 86 (active)>",
"<NSLayoutConstraint:0x60000226d540 'inputView.top' V:[_TtGC7SwiftUIP10$1c7cfcbc018InputAccessoryHostVS_P10$1c7cfcc5417InputAccessoryBar_:0x102973200]-(0)-[_UIRemoteKeyboardPlaceholderView:0x1038ef360] (active)>"
)
Will attempt to recover by breaking constraint
<NSLayoutConstraint:0x60000226d540 'inputView.top' V:[_TtGC7SwiftUIP10$1c7cfcbc018InputAccessoryHostVS_P10$1c7cfcc5417InputAccessoryBar_:0x102973200]-(0)-[_UIRemoteKeyboardPlaceholderView:0x1038ef360] (active)>
Make a symbolic breakpoint at UIViewAlertForUnsatisfiableConstraints to catch this in the debugger.
The methods in the UIConstraintBasedLayoutDebugging category on UIView listed in <UIKitCore/UIView.h> may also be helpful.
Type: Error | Timestamp: 2024-07-31 06:30:56.177554-04:00 | Process: SG-002-Tutorial1 | Library: UIKitCore | Subsystem: com.apple.UIKit | Category: LayoutConstraints | TID: 0xfcd38
This is then followed by a series of error messages, and my application freezes.
Errors
To clarify, in my project's source I am not setting any constraints or converting coordinates between views (at least not knowingly).
I am going to attempt to reduce this to a simpler project that replicates the error, but I'd be thankful for any insights. I tried setting a symbolic breakpoint as suggested in the warning, but it hit in a file of assembly code that I am not sure what to do with.
Does anyone know how to get raw eye-tracking sensor data from Apple Vision Pro?
I was surprised to find the update to the Home button for the Control Center in the beta 2 update to visionOS. It essentially cripples my app (in review) because it hijacks looking at the palm and pinching, which is a very natural position for the hand to be in if, for instance, you're holding something in it, as is done in my app.
I can't imagine other apps won't want to do the same. I looked at a few others and noticed that Blackbox, for instance, hides the button at the beginning, but as soon as you do the gesture it comes back and there's no way to get rid of it again, so the gesture kicks you out of the app on subsequent uses.
I'd like this feedback to reach the Product Development team at Apple and am hoping this makes a difference in this feature moving forward. If nothing else, I'd like to see a way to disable it in my app.
If anyone else feels this way about this feature, please chime in so that we can get some eyes on it.
Regards.
I have been able to get object tracking working with visionOS 2. Now, in my RealityView, when my reference object is detected, I overlay digital content on top of it. I implement this with a Transform entity, attaching an object anchor to the entity and then placing my digital content in the scene (inside Reality Composer Pro).
I now want to know whether it's possible to create attachments and attach them to the digital content (say, modelXYZ) that is spawned when the physical object is detected. If I need to write SwiftUI code that works together with my RCP scene (the one with the object-tracking content), how do I do this? Some sample code or a reference would be extremely helpful; see the sketch below for what I am imagining.
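To make the question concrete, this is roughly the shape I have in mind (a sketch; "modelXYZ" and the "infoLabel" attachment id are placeholders for my own names):

RealityView { content, attachments in
    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(scene)
        // Parent the SwiftUI attachment to the tracked model once it exists.
        if let model = scene.findEntity(named: "modelXYZ"),
           let label = attachments.entity(for: "infoLabel") {
            label.position = [0, 0.15, 0] // hover slightly above the model
            model.addChild(label)
        }
    }
} attachments: {
    Attachment(id: "infoLabel") {
        Text("Tracked object")
            .padding()
            .glassBackgroundEffect()
    }
}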
Hi there,
My app uses the .mixed immersion mode with an ImmersiveSpace rendering metal content into a compositor frame while also using Windows for SwiftUI content.
In the screenshot below, you can see a red outline rendered in Metal. Note that the SwiftUI content is always rendered on top, even though the depth of the plane is behind the depth of the Metal content.
Is this behaving as expected or should I be hunting for a bug in my code?
Thank you!
I'm having trouble pairing my Apple Vision Pro with my MacBook Pro M3. The MacBook Pro is on Sonoma 14.6, and I have tested pairing with both a visionOS 1.2 and a 2.0 Vision Pro, but it still doesn't work. I have a Mac mini that pairs and connects fine to the headsets. Here are the steps I have tried so far on the Vision Pro and the MacBook Pro, with no success:
On the same Windows Wi-Fi hotspot
On the same iPhone hotspot
On another Wi-Fi hotspot
Tried to clear remote devices; still not recognized
Tried to turn developer mode off and on again; still nothing
Tried to reset network parameters
Tried to restart the headset
Tried to restart Xcode
Tried to restart the Mac
Just after a restart, the headset showed up and I clicked pair and typed in the code, but the headset then remained "disconnected" and couldn't connect to the Mac
Tried to restart the Mac and the headset
Tried to rename the headset
Tried to switch Macs
Tried one headset on at a time
Tried to clean the build folder
Deleted the contents of ~/Library/Developer/Xcode/DerivedData
Tried sudo defaults write /Library/Preferences/com.apple.mDNSResponder.plist NoMulticastAdvertisements -bool true
Tried to deactivate the firewall