Hello, I'm tracking down a bug where useResource doesn't seem to apply proper synchronization when a resource is produced by a render pass and then consumed by a compute pass; when I instead use an MTLFence to signal and wait between the render and compute encoders, the artifact goes away.
The resource is created with MTLHazardTrackingModeTracked and useResource is called on the compute encoder after the render pass. Metal API Validation doesn't report any warnings/errors.
Am I misunderstanding the difference between the two APIs? I dug through the Metal documentation, and it looks like useResource should handle synchronization as long as the resource uses MTLHazardTrackingModeTracked, while on the other hand MTLFence should be used to ensure proper synchronization between command encoders. Can someone clarify the difference between the two APIs and when to use each?
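For reference, here's a minimal sketch of the fence workaround that makes the artifact go away (device, commandBuffer, renderPassDescriptor, computePipeline, and texture stand in for my actual objects):

let fence = device.makeFence()!

let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
// ... draw into `texture` ...
renderEncoder.updateFence(fence, after: .fragment)   // signal once fragment work has finished
renderEncoder.endEncoding()

let computeEncoder = commandBuffer.makeComputeCommandEncoder()!
computeEncoder.waitForFence(fence)                   // wait before the compute pass reads `texture`
computeEncoder.setComputePipelineState(computePipeline)
computeEncoder.setTexture(texture, index: 0)
computeEncoder.useResource(texture, usage: .read)    // the call I expected to be sufficient on its own
// ... dispatch threadgroups ...
computeEncoder.endEncoding()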
Issues building Unity plug-in project: Cannot locate native library Apple.Core/Apple.GameKit for iOS
I'm having trouble getting a correctly built package out of the Apple Unity Plug-in project.
When building my game project in Unity, the following error is printed to the console:
[Apple.Core.AppleNativeLibraryUtility] Cannot locate a Debug or Release Apple.Core native library for iOS.
Please ensure that the build invocation (build.py, xcodebuild, or Xcode) compiled cleanly and that the build was configured to support Debug on iOS.
As far as I can tell the build did compile cleanly, but I might be missing something.
If anyone can see what I'm doing wrong or has any insight it would be greatly appreciated.
Setup is the following:
macOS Tahoe 26 Beta
Xcode-beta Version 26.0 beta 3 (17A5276g)
Unity Plug-in branch: 2025-beta1
Unity game project version: 2022.3.60f
M1 MacBook Pro
The built packages have been imported into the game project through the Unity Package Manager using the tarball option pointing to the built packages from the Unity Plug-in project.
The Unity Plug-in project has been built using the build.py file with the following:
python3 build.py -m iOS iPhoneSimulator -p Core GameKit CoreHaptics GameController -k all
The output is available in the attached file.
build-output.txt
Here's an image of the NativeLibraries~ folder inside the built Apple.Core package.
I have a Core Image filter in my app that uses Metal. I cannot compile it because the build complains that the executable tool 'metal' is not available, even though I have installed it in Xcode.
If I go to the "Components" section of Xcode Settings, it shows it as downloaded. And if I run the suggested command, it also shows it as installed. Any advice?
Xcode Version
Version 26.0 beta (17A5241e)
Build Output
Showing All Errors Only
Build target Lessons of project StudyJapanese with configuration Light
RuleScriptExecution /Users/chris/Library/Developer/Xcode/DerivedData/StudyJapanese-glbneyedpsgxhscqueifpekwaofk/Build/Intermediates.noindex/StudyJapanese.build/Light-iphonesimulator/Lessons.build/DerivedSources/OtsuThresholdKernel.ci.air /Users/chris/Code/SerpentiSei/Shared/iOS/CoreImage/OtsuThresholdKernel.ci.metal normal undefined_arch (in target 'Lessons' from project 'StudyJapanese')
cd /Users/chris/Code/SerpentiSei/StudyJapanese
/bin/sh -c xcrun\ metal\ -w\ -c\ -fcikernel\ \"\$\{INPUT_FILE_PATH\}\"\ -o\ \"\$\{SCRIPT_OUTPUT_FILE_0\}\"'
'
error: error: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
/Users/chris/Code/SerpentiSei/StudyJapanese/error:1:1: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
Build failed 6/9/25, 8:31 PM 27.1 seconds
Result of xcodebuild -downloadComponent MetalToolchain (after switching to Xcode-beta.app with xcode-select)
xcodebuild -downloadComponent MetalToolchain
Beginning asset download...
Downloaded asset to: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/4d77809b60771042e514cfcf39662c6d1c195f7d.asset/AssetData/Restore/022-19457-035.dmg
Done downloading: Metal Toolchain (17A5241c).
Screenshots from Xcode
Result of "Copy Information"
Metal Toolchain 26.0 [com.apple.MobileAsset.MetalToolchain: 17.0 (17A5241c)] (Installed)
I've had no issue calling image files in my .swift files, but they are causing crashes when used in my .sks files. When I set a sprite texture to an image in the inspector/editor, then at runtime when that sprite is loaded I get the error: "Cannot get value with size 16. The type encoded as {CGRect={CGPoint=dd}{CGSize=dd}} is expected to be 32 bytes." From my research it has something to do with Apple switching from 32-bit to 64-bit machines. From ChatGPT: "SpriteKit under the hood uses NSKeyedUnarchiver to load your .sks file. That unarchiver decodes each archived property by reading a fixed-size blob of bytes and mapping it into a C struct. In your case it ran into a mismatch." I am using a 64-bit machine to write my code and 64-bit simulators and physical devices, so there isn't a clear cause of the mismatch. My scenes play fine in Xcode 16's preview window and my code builds; it just crashes at runtime.
When I don't use image-textured assets in the .sks file it works fine. It loads animated labels and plain color squares. I've been able to work around this for static things, like a sprite with a background texture, by writing code like the following in a regular (non-scene) Swift file:
if let scene = SKScene(fileNamed: "GameScene2") {
    let bg = SKSpriteNode(imageNamed: "YourBackgroundImage")
    bg.position = CGPoint(x: scene.frame.midX, y: scene.frame.midY)
    bg.zPosition = -1
    scene.addChild(bg)
}
The issue now is that I want to make a particle emitter and other non-static sprites, but my understanding of their properties isn't deep enough to create them without the editor. Also, setting an SKTexture in a Swift file causes the same 16/32-byte runtime crash. Could you help me figure out how to fix the bug so I can use the editor again? Otherwise, could you help me write a workaround like the one I use for background images? I have a feeling the answer is in writing my own NSKeyedUnarchiver, but I don't know how to make sure it's called instead of the default one. I've already tried cleaning my project multiple times and deleting and re-adding sprite nodes. Thank you.
I'm a newbie at Vulkan and Xcode.
I have my project on GitHub: https://github.com/flocela/OrangeSpider/
Whenever I run, two windows open instead of only one.
I added testing, which means I have an OrangeSpider.xctestplan in the OrangeSpider/TestsOrangeSpider/ folder.
This is my first time adding testing to an Xcode project, so I think this may be where the problem is.
I also get this error message:
ViewBridge to RemoteViewService Terminated: Error Domain=com.apple.ViewBridge Code=18 "(null)" UserInfo={com.apple.ViewBridge.error.hint=this process disconnected remote view controller -- benign unless unexpected, com.apple.ViewBridge.error.description=NSViewBridgeErrorCanceled}
Topic: Graphics & Games / SubTopic: Metal
Hi everyone,
I'm building a native iOS app using Unreal Engine 5.6 with Firebase for authentication and Firestore. The app uses a MetaHuman avatar and is meant to run as a standalone UE app on iPhone.
I'm using this Firebase wrapper:
👉 https://pandoa.github.io/FirebaseFeatures/
I've followed all the steps, including:
Adding GoogleService-Info.plist to the Xcode project and ensuring it’s in the correct target
Calling FIRApp.configure() in AppDelegate
Verifying the plist is bundled correctly
However, the app crashes on launch, and Firebase does not initialize properly.
Crash log shows:
[FirebaseCore][I-COR000005] No app has been configured yet.
Setup details:
Unreal Engine: 5.6 (source build, macOS)
iOS Deployment: 17.5
MetaHuman character packaged correctly and app launches fine without Firebase
Has anyone here managed to get Firebase working inside a native Unreal Engine iOS app with this setup? I'd love to hear if there’s something I’m missing — maybe something with initialization timing or module loading?
Thanks so much in advance 🙏
Topic: Graphics & Games / SubTopic: General
Hello everyone,
I must have missed something, but why isn't there a depthAttachmentPixelFormat on the new Metal 4 MTL4RenderPipelineDescriptor, unlike the old MTLRenderPipelineDescriptor?
So how do you set the depth pixel format?
Thanks in advance!
Hi, I have attempted to find a fix for my issue via online documentation and one phone support call (not code-level support), to no avail. I could continue to try various things, but I'd like to see if someone else has encountered this issue and found a fix for it.
Background: My game app is live on the App Store and has one classic leaderboard. I am now getting ready to submit an update, which also entails adding a new recurring leaderboard. I added the new leaderboard in App Store Connect; however, I have NOT uploaded my new build yet. I have also not added my leaderboards (currently live and not live) to any set.
When I try to submit scores to the new, not-yet-live leaderboard using
GKLeaderboard.submitScore(_:context:player:leaderboardIDs:completionHandler:)
it works (gives me no error).
When I try to load the scores from the new, not-yet-live leaderboard using
GKLeaderboard.loadLeaderboards(IDs:completionHandler:)
loadEntries(for:timeScope:range:completionHandler:)
it fails with the error "leaderboardID not found".
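For completeness, the failing load path looks roughly like this ("my.new.leaderboard.id" stands in for the actual ID configured in App Store Connect):

GKLeaderboard.loadLeaderboards(IDs: ["my.new.leaderboard.id"]) { leaderboards, error in
    if let error = error {
        // This is where I see "leaderboardID not found".
        print("loadLeaderboards failed: \(error.localizedDescription)")
        return
    }
    leaderboards?.first?.loadEntries(for: .global, timeScope: .allTime,
                                     range: NSRange(location: 1, length: 10)) { localPlayerEntry, entries, totalCount, error in
        if let error = error {
            print("loadEntries failed: \(error.localizedDescription)")
        } else {
            print("Loaded \(entries?.count ?? 0) of \(totalCount) entries")
        }
    }
}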
I could try (and will):
uploading the new build to App Store Connect and associating the new leaderboard with it before testing again
associating each leaderboard with a set
Is there anything else that I should be aware of?
Thanks in advance
The following code, using the new GestureComponent, demonstrates an inconsistency: the tap gesture prints output, but the drag gesture does not.
I already checked this post, which points to this seemingly outdated sample code
I assume that example is deprecated in favour of the now built-in version of GestureComponent.
Nonetheless, there are no compiler warnings or errors; it just fails silently.
TapGesture, LongPressGesture, MagnifyGesture, and RotateGesture all work, so this feels like an oversight.
RealityView { content in
    let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
    testEntity.position = SIMD3<Float>(0, 0, -1)
    testEntity.components.set(InputTargetComponent())
    testEntity.components.set(CollisionComponent(
        shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
    ))

    let testGesture = TapGesture()
        .onEnded { value in
            print("Tapped")
        }
    testEntity.components.set(GestureComponent(testGesture))

    let dragGesture = DragGesture()
        .onEnded { value in
            print("Dragged")
        }
    testEntity.components.set(GestureComponent(dragGesture))

    content.add(testEntity)
}
Hi everyone,
I’m running into an issue with RealityKit when trying to animate BlendShapes (ShapeKeys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender).
The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that weights are being updated, but the visual changes are not visible.
The exact same blend shape animation works perfectly when no animation is playing.
In SceneKit the same model works as expected: shape keys are animated during animation playback, but not in RealityKit.
As soon as an animation starts, the shape keys stop animating.
Here’s the test project on GitHub that demonstrates the issue clearly:
https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample
The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing.
Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates?
Thanks in advance for any insights.
Hello everyone,
I'm working on a visionOS application using RealityKit and am encountering a common coordinate system challenge when integrating 3D models created in Blender.
My goal is to display and dynamically update the Transform (position, rotation, scale) of models created in Blender within RealityKit.
The issue arises because Blender's default coordinate system is Z-up, and while exporting to USD/USDZ, I don't have a reliable "Y-up" export option that correctly reorients the model and its transform data for RealityKit's Y-up convention. This means I'm essentially exporting models with their "up" direction along the Z-axis.
When I load these Z-up exported models into RealityKit, they are often oriented incorrectly. To then programmatically update their Transform (e.g., move them, rotate them based on game logic, or apply physics), I need to ensure that the Transform values I set align with RealityKit's Y-up system, even though the original model data was authored in a Z-up context.
My questions are:
What is the recommended transformation process (e.g., using simd_quatf or simd_float4x4) to convert a Transform that was conceptually defined in a Z-up coordinate system to RealityKit's Y-up coordinate system? Specifically, when I have a Transform (or its translation, rotation, scale components) from a Z-up context, how should I apply this to a RealityKit Entity so it appears and behaves correctly in a Y-up world?
Are there any existing convenience APIs or helper functions within RealityKit, simd, or other Apple frameworks that simplify this Z-up to Y-up Transform conversion process? Or is a manual application of a transformation quaternion (e.g., simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])) the standard approach?
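For concreteness, here is the kind of manual conversion I currently have in mind, as a sketch (whether this composition is actually correct is part of my question):

import RealityKit
import simd

// Reorient a transform authored in a Z-up world into RealityKit's Y-up world
// by pre-rotating -90 degrees about the X axis.
let zUpToYUp = simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))

func convertZUpToYUp(_ zUp: Transform) -> Transform {
    Transform(
        scale: zUp.scale,                            // scale acts in local space, so it is unchanged
        rotation: zUpToYUp * zUp.rotation,           // pre-compose the reorientation
        translation: zUpToYUp.act(zUp.translation)   // rotate the translation into the Y-up frame
    )
}

// Usage: entity.transform = convertZUpToYUp(transformFromBlender)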
Any guidance, code examples, or best practices from those who have faced similar challenges would be greatly appreciated!
Thank you.
Topic: Graphics & Games / SubTopic: RealityKit
Tags: Reality Composer, RealityKit, Reality Composer Pro, visionOS
Hello RealityKit developers,
I'm currently working on physics simulations in my visionOS app and am trying to adapt the concepts from the official sample Simulating physics joints in your RealityKit app.
In the sample, a sphere is connected to the ceiling using a PhysicsRevoluteJoint to create a hinge-like simulation. I've successfully modified this setup to use a PhysicsSphericalJoint instead.
The basic replacement works as expected: pin1 (attached to the sphere) rotates freely around pin0 (attached to the ceiling), much like a ball-and-socket joint should, removing all translational degrees of freedom.
My challenge lies with the PhysicsSphericalJoint's angularLimitInYZ property. The documentation mentions that this property allows limiting the rotation around the Y and Z axes, defining an "elliptical cone shape around the x-axis of pin0." However, I'm struggling to understand how to specify these values to achieve a desired rotational limit.
If I have a sphere that is currently capable of rotating 360 degrees around pin0 (like a free-spinning ball on a string), how would I use angularLimitInYZ to restrict its rotation to a certain height or angular range, preventing it from completing a full circle?
Specifically, I'm trying to achieve a "swing" like behavior where the sphere oscillates back and forth but cannot rotate completely overhead or underfoot. What values or approach should I use for the angularLimitInYZ tuple to define such a restricted pendulum-like motion?
Any insights, code examples, or explanations on how to properly configure angularLimitInYZ for this kind of behavior would be incredibly helpful!
The following code is modified from the sample.
extension MainView {
    func addPinsTo(ballEntity: Entity, attachmentEntity: Entity) throws {
        let hingeOrientation = simd_quatf(from: [1, 0, 0], to: [0, 0, 1])
        let attachmentPin = attachmentEntity.pins.set(
            named: "attachment_hinge",
            position: .zero,
            orientation: hingeOrientation
        )
        let relativeJointLocation = attachmentEntity.position(
            relativeTo: ballEntity
        )
        let ballPin = ballEntity.pins.set(
            named: "ball_hinge",
            position: relativeJointLocation,
            orientation: hingeOrientation
        )
        // Create a PhysicsSphericalJoint between the two pins.
        let sphericalJoint = PhysicsSphericalJoint(pin0: attachmentPin, pin1: ballPin)
        try sphericalJoint.addToSimulation()
    }
}
The following image is a screenshot of the behavior after changing to PhysicsSphericalJoint.
Thank you in advance for your assistance.
Hi,
What's the best way to handle drastic changes in scene characteristics with the new MTLFXTemporalDenoisedScaler?
Let's say a visible object in the scene radically changes its material properties. I can update the albedo and roughness textures accordingly, but I suspect the history will be corrupted: blending visual information between the new frame and the previous ones might be nonsense.
I guess the problem is the same when objects appear or disappear instantly.
Does the upscaler manage these events for us (by lowering blending), or should we use the reactive mask or the denoise strength mask, or something like that, to handle them?
In SceneKit, when creating an .scn file from a rigged model, the framework created an SCNNode for each bone/joint, so you could add and remove child nodes directly to and from joints, and like any other SCNNode, you could access world position and world orientation for each joint. The analog would be for joints to be accessible as child entities of a ModelEntity in RealityKit. I am unable to proceed with migrating my project from SceneKit because of this, as there does not seem to be a way to even access the true world position of a joint with the current jointNames/jointTransforms paradigm.
The translation information from the given transforms is insufficient to determine the location of a joint at any given time, and other approaches like creating a GeometricPin for the given joint name and attaching it to another entity do not seem to work. So conveniently being able to attach an item to the hand of a rigged model was trivial in SceneKit and now feels impossible in RealityKit.
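For reference, the kind of manual reconstruction the current jointNames/jointTransforms paradigm seems to require looks roughly like the sketch below (assuming joint names are "/"-separated hierarchy paths ordered root-first, which I haven't verified for every asset); even then it only yields a model-space matrix rather than something I can parent an entity to:

import RealityKit
import simd

// Approximate a joint's model-space transform by multiplying the local
// jointTransforms of every ancestor along the joint's "/"-separated name path.
func modelSpaceTransform(ofJointNamed jointName: String, in model: ModelEntity) -> simd_float4x4? {
    var matrix = matrix_identity_float4x4
    var path = ""
    for component in jointName.split(separator: "/") {
        path = path.isEmpty ? String(component) : path + "/" + String(component)
        guard let index = model.jointNames.firstIndex(of: path) else { return nil }
        matrix = matrix * model.jointTransforms[index].matrix
    }
    return matrix
}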
I am not the first person to notice this, and I am feeling demoralized about proceeding with RealityKit with such a critical piece of functionality blocked:
https://stackoverflow.com/questions/76726241/how-do-i-attach-an-entity-to-a-skeletons-joint-in-realitykit
Will this be addressed in some way?
Apple, please bring back SceneKit.
Breaking Through PolySpatial's ~8k Object Limit – Seeking Alternative Approaches for Large-Scale Digital Twins
Confirmed: PolySpatial Doubles the MeshFilter Count – Hard Limit at ~8k Active Objects (15.9k Total)
Project Context & Research Goals
I’m developing an industrial digital twin application for Apple Vision Pro using Unity’s PolySpatial framework (RealityKit rendering in Unbounded_Volume mode). The scene contains complex factory environments with:
Production line equipment (many fragmented grid objects that need to be merged)
Dynamic product racks (state-switchable assets)
Animated worker avatars
To optimize performance, I’m systematically testing visionOS’s rendering capacity limits. Through controlled stress tests, I’ve identified a critical threshold:
Key Finding
When the total MeshFilter count reaches 15,970 (system baseline + 7,985 user-created objects × 2 due to PolySpatial cloning), the application crashes consistently. This suggests:
PolySpatial’s mirroring mechanism effectively doubles GameObject overhead
An apparent hard limit exists around ~8k active mesh objects in practice
Objectives for This Discussion
Verify if others have encountered similar limits with PolySpatial/RealityKit
Understand whether this is a:
Memory constraint (per-app allocation)
Render pipeline limit (Metal draw calls)
Unity-specific PolySpatial behavior
Explore optimization strategies beyond brute-force object reduction
Why This Matters
Industrial metaverse applications require rendering thousands of interactive objects. Confirming these limits will help our team:
Design safer content guidelines
Prioritize GPU instancing/LOD investments
Potentially contribute back to PolySpatial’s optimization
I’d appreciate insights from engineers who’ve:
Pushed similar large-scale scenes in visionOS
Worked around PolySpatial’s cloning overhead
Discovered alternative capacity limits (vertices/draw calls)
TL;DR: RealityKit and Reality Composer Pro aren't forward or backward compatible with each other, and the resulting error message is terse and unhelpful. (FB14828873)
So far, I've been sticking with Xcode 16.4 for development and only using Xcode 26.0 beta experimentally.
Yesterday, I used xcode-select to switch to Xcode 26.0 beta 3 to test it, but I forgot to switch back.
Consequently, this morning I unintentionally used the future Reality Composer Pro (the version included with Xcode 26) to make a small change to a USD file.
Now I realize that, if I'm unlucky, Reality Composer Pro may have silently introduced a small change into the USD file that makes RealityKit fail to read the file on iOS 18 and visionOS 2. In the past, that kind of failure has cost hours of debugging to track down, often to a single line in the USD file, because RealityKit can't communicate the problem other than with the error "the operation couldn't be completed".
As an analogy, this situation is as if, during regular development (not involving Reality Composer Pro), Xcode didn't warn you about specific API version conflicts, but instead failed with a generic error message, without highlighting the line in your Swift file that was the source of the error.
I am using Unreal Engine 5.6 on a MacBook Pro with an M3 chip and macOS 15.5. I've installed Xcode and accepted the license, but Unreal is not detecting the latest Metal Shader Standard (Metal v3.0). The maximum version Unreal sees is Metal v2.4, even though the hardware and OS should support Metal 3.0. I've also run sudo xcode-select -s /Applications/Xcode.app and accepted the license via Terminal. Is there anything in Xcode settings, SDK availability, or system permissions that could be preventing access to Metal 3.0 features?
Hi ,
My application hits the crash backtrace below at a very low reproduction rate among public users; I don't see it correlate with a specific iOS version or iPhone model. The last line of my application code is a call to the CAMetalLayer nextDrawable API.
I did some basic investigation and suppose it may relate to a wrong CAMetalLayer configuration, such as:
frame property w or h <= 0.0
bounds property w or h <= 0.0
drawableSize w or h <= 0.0 or w or h > max value (like 16384)
I'm not sure whether my thinking above is right. Could the UIView that my CAMetalLayer is attached to cause such a nextDrawable crash?
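In other words, the kind of guard I'm considering before calling nextDrawable looks roughly like this (a Swift sketch; metalLayer stands for the layer attached to my UIView, and the 16384 limit is an assumption):

// Skip the frame instead of letting nextDrawable assert on a bad configuration.
let size = metalLayer.drawableSize
let maxTextureDimension: CGFloat = 16384   // assumed upper bound; the real limit depends on the GPU
guard size.width > 0, size.height > 0,
      size.width <= maxTextureDimension, size.height <= maxTextureDimension else {
    return
}
guard let drawable = metalLayer.nextDrawable() else {
    return   // nextDrawable can also legitimately return nil (e.g. all drawables are in use)
}
// ... encode and present `drawable` ...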
Thanks a lot
Main Thread - Crashed
libsystem_kernel.dylib   __pthread_kill
libsystem_c.dylib        abort
libsystem_c.dylib        __assert_rtn
Metal                    MTLReportFailure.cold.1
Metal                    MTLReportFailure
Metal                    _MTLMessageContextEnd
Metal                    -[MTLTextureDescriptorInternal validateWithDevice:]
AGXMetalA13              0x245b1a000 + 4522096
QuartzCore               allocate_drawable_texture(id<MTLDevice>, __IOSurface*, unsigned int, unsigned int, MTLPixelFormat, unsigned long long, CAMetalLayerRotation, bool, NSString*, unsigned long)
QuartzCore               get_unused_drawable(_CAMetalLayerPrivate*, CAMetalLayerRotation, bool, bool)
QuartzCore               CAMetalLayerPrivateNextDrawableLocked(CAMetalLayer*, CAMetalDrawable**, unsigned long*)
QuartzCore               -[CAMetalLayer nextDrawable]
SpaceApp                 -[MetalRender renderFrame:] MetalRenderer.mm:167
SpaceApp                 -[FrameBuffer acceptFrame:] VideoRender.mm:173
QuartzCore               CA::Display::DisplayLinkItem::dispatch_(CA::SignPost::Interval<(CA::SignPost::CAEventCode)835322056>&)
QuartzCore               CA::Display::DisplayLink::dispatch_items(unsigned long long, unsigned long long, unsigned long long)
QuartzCore               CA::Display::DisplayLink::dispatch_deferred_display_links(unsigned int)
UIKitCore                _UIUpdateSequenceRun
UIKitCore                schedulerStepScheduledMainSection
UIKitCore                runloopSourceCallback
CoreFoundation           __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__
CoreFoundation           __CFRunLoopDoSource0
CoreFoundation           __CFRunLoopDoSources0
CoreFoundation           __CFRunLoopRun
CoreFoundation           CFRunLoopRunSpecific
GraphicsServices         GSEventRunModal
UIKitCore                -[UIApplication _run]
UIKitCore                UIApplicationMain
What is Game Mode?
Game Mode optimizes your gaming experience by giving your game the highest priority access to your CPU and GPU, lowering usage for background tasks. And it doubles the Bluetooth sampling rate, which reduces input latency and audio latency for wireless accessories like game controllers and AirPods.
See Use Game Mode on Mac
See Port advanced games to Apple platforms
How can I enable Game Mode in my game?
Add the Supports Game Mode property (GCSupportsGameMode) to your game's Info.plist and set it to true
Correctly identify your game's Application Category with LSApplicationCategoryType (also in Info.plist)
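For example, the two Info.plist entries look like this (use whichever games category string fits your title):

<key>GCSupportsGameMode</key>
<true/>
<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>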
Note:
Enabling Game Mode makes your game eligible but is not a guarantee; the OS decides if it is ok to enable Game Mode at runtime
An app that enables Game Mode but isn’t a game will be rejected by App Review.
How can I disable Game Mode?
Set GCSupportsGameMode to false.
Note: On Mac, Game Mode is automatically disabled if the game isn't running full screen.