Struggles with attaching a ModelEntity to the skeleton joints of another ModelEntity

In SceneKit, when creating an .scn file from a rigged model, the framework created an SCNNode for each bone/joint, so you could add and remove child nodes directly to and from joints, and, like any other SCNNode, you could access the world position and world orientation of each joint. The RealityKit analog would be for joints to be accessible as child entities of a ModelEntity. This is blocking my migration from SceneKit, because there does not seem to be a way to even access the true world position of a joint with the current jointNames/jointTransforms paradigm.
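For context, this is all it took in SceneKit (a minimal example; skinnedNode, swordNode, and the bone name are placeholders):

import SceneKit

// Each bone is an SCNNode, so attaching an item is just parenting,
// and world position/orientation are available directly on the bone node.
func attachSword(_ swordNode: SCNNode, to skinnedNode: SCNNode) {
    if let handBone = skinnedNode.childNode(withName: "hand_R", recursively: true) {
        handBone.addChildNode(swordNode)   // follows the joint automatically
        _ = handBone.worldPosition         // world-space position of the joint
        _ = handBone.worldOrientation      // world-space orientation of the joint
    }
}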

The translation information in the provided jointTransforms alone is insufficient to determine the location of a joint at any given time, and other approaches, like creating a GeometricPin for a given joint name and attaching it to another entity, do not seem to work. Attaching an item to the hand of a rigged model was trivial in SceneKit and now feels impossible in RealityKit.

I am not the first person to notice this, and I am feeling demoralized about proceeding with RealityKit with such a critical piece of functionality blocked: https://stackoverflow.com/questions/76726241/how-do-i-attach-an-entity-to-a-skeletons-joint-in-realitykit

Will this be addressed in some way?

Hello @AlphaHalcyonInteractive , thank you for your question!

I recommend downloading our BOT-anist sample and taking a look at how the head and backpack are attached to the body.

In RealityKit, the skeleton joint positions are not separate entities with transform components, as you might expect. This means you don't interact with them by placing other Entities as descendants of the skeleton joints. Instead, to get the position of a joint for the purpose of attaching another Entity to it, you calculate its transform by multiplying a chain of joint transforms together each frame.

There are a few steps to this. First, you will need to get the indices for the chain of joints, starting from the root joint, that connect to your "head", "backpack", "arm_cannon", or whatever joint you're looking for. In BOT-anist, see the function getJointHierarchy in RobotCharacter.swift for a code implementation. Then, you will need to set the pin on the skeleton, which occurs on lines 92 and 93 of that same file; this returns an entity with a position you can use as an offset later. Finally, the code that sets the position each frame lives in JointPinSystem.swift, in the function named pinEntity. It contains the relevant code for multiplying your chain of joint transforms together and setting the position of the pinned entity.
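As a rough illustration of the transform-chaining idea (this is a simplified sketch, not the BOT-anist code itself; jointModelTransform is a hypothetical helper, and it relies on joint names being "/"-separated paths):

import RealityKit
import simd

func jointModelTransform(for jointPath: String, in model: ModelEntity) -> simd_float4x4? {
    // Joint names are "/"-separated paths, e.g. "root/spine/arm_L/hand_L",
    // and each jointTransform is expressed relative to its parent joint.
    var matrix = matrix_identity_float4x4
    var path = ""
    for component in jointPath.split(separator: "/") {
        path = path.isEmpty ? String(component) : path + "/" + String(component)
        guard let index = model.jointNames.firstIndex(of: path) else { return nil }
        // Accumulate parent-relative transforms from the root joint down.
        matrix = matrix * model.jointTransforms[index].matrix
    }
    // The result is in the model entity's own space; convert with the entity's
    // transform if you need a world-space position.
    return matrix
}

Running something like this every frame (for example, from a System update) and applying the result to the attached entity approximates what pinEntity does in the sample.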

This isn't as straightforward as placing a ModelEntity as a descendant of a joint entity, so I recommend detailing your use case and submitting your feedback using Feedback Assistant.

Let me know if you have any more questions!

Thanks for your response!

The other day, I tried to chain together the translations from each joint transform and noticed, across multiple different rigs, that the translation sum for a peripheral joint like the wrist still comes out very close to zero.

I will try using matrix multiplication as it is demonstrated in pinEntity and see if I have more success with that. I can write something up for Feedback Assistant as well, as it would be cool to have work out of the box.
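After thinking about it, the reason a plain translation sum can't work makes sense: a parent joint's rotation re-orients its child's local translation, and only the full matrix product captures that. A tiny illustration with made-up values:

import simd

let parentRotation = simd_quatf(angle: .pi / 2, axis: [0, 0, 1]) // parent joint rotated 90° about z
let childLocalTranslation = SIMD3<Float>(0, 1, 0)                // child joint's translation in parent space
let rotatedTranslation = parentRotation.act(childLocalTranslation)
// rotatedTranslation ≈ (-1, 0, 0): the child ends up somewhere a plain
// translation sum would never predict, so the 4x4 matrices must be multiplied.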

I have another question about jointTransforms and AnimationResources if you're up for it:

I am trying to play two separate pose-based joint animations at once and was hoping to restrict each animation to only the joint names and transforms I want animated (by passing only those joint names to the FromToByAnimation initializer and then using its fromValue and toValue properties).

Long story short, I noticed that when I exclude upper-body joints from an animation, they still seem to experience scale and rotation animation for the first keyframe or so.

It almost seems like the first joint in the chain that is excluded from the animation gets a zeroed-out Transform before returning to its base resting transform. Here is a code sample for retrieving joint transforms from a map of bones to orientations:

func transformsForPose(_ pose: Pose, exclude excludedBones: [Bone]) -> [(String, Transform)] {
    // Start from the rest pose and overwrite the rotation of every bone the pose specifies.
    var updatedTransforms = restPose // initial model.jointTransforms
    let excludedBoneNames = excludedBones.map({ $0.rawValue })
    for bone in pose.keys {
        guard let boneIndex = getBoneIndex(bone), let boneRotation = pose[bone] else {
            continue
        }
        updatedTransforms[boneIndex].rotation = boneRotation
    }
    // Pair each transform with its joint name, dropping excluded bones entirely.
    return updatedTransforms.indices.compactMap({
        let name = model.jointNames[$0]
        let transform = updatedTransforms[$0]
        let suffix = name.getSuffix(separator: "/").lowercased()
        if excludedBoneNames.contains(suffix) {
            return nil
        } else {
            return (name, transform)
        }
    })
}

And then creating an animation from those joint names and transforms:

static func animationForPoseIndex(_ index: Int,
                                  poses: [Pose], exclude: [Bone],
                                  durations: [TimeInterval],
                                  for characterEntity: CharacterEntity,
                                  named name: String) -> AnimationResource? {
    let pose = poses[index]
    // Animate from the previous pose in the cycle (wrapping around) to the current pose.
    let fromTransforms = characterEntity.transformsForPose(index > 0 ? poses[index - 1] : poses[poses.count - 1], exclude: exclude)

    let duration = durations[index]
    var animation = FromToByAnimation(jointNames: fromTransforms.map({ $0.0 }))
    animation.name = name + index.description
    animation.fromValue = JointTransforms(fromTransforms.map({ $0.1 }))

    let toTransforms = characterEntity.transformsForPose(pose, exclude: exclude)
    animation.toValue = JointTransforms(toTransforms.map({ $0.1 }))
    animation.bindTarget = .jointTransforms
    animation.blendLayer = 100
    animation.duration = duration
    animation.isRotationAnimated = true
    animation.fillMode = .forwards
    return try? AnimationResource.generate(with: animation)
}

When I don't exclude any joints (i.e., include every joint name and transform) and use a rest pose to fill in the transforms for the joints that won't be active, I don't see the issue. Am I missing something obvious, or is there a bug here?

To follow up on this, I discovered the problem was further downstream from here: calling playAnimation with a transitionDuration of 0.0 seconds solved the second issue described in this post.
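For anyone who hits the same thing, the fix was just at the call site (poseAnimation below stands in for an AnimationResource generated as above):

// Playing with no cross-fade avoids the excluded joints being blended
// through an unexpected transform at the start of the animation.
characterEntity.playAnimation(poseAnimation, transitionDuration: 0.0)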

I will update with the results of trying the pinEntity approach for the first issue.

Is there any precedent for a GeometricPin not getting properly attached to a rig despite being passed the correct joint name? I've tried setting a pin on the hand joint of several Blender-exported rigs, using both the full joint name and just the suffix. No matter which joint name I use, the pin's position comes back nil, which the docs suggest means it failed to map to an existing joint name.
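(For anyone reading along, a quick way to rule out a simple name mismatch is to dump the names the skeleton actually exposes and compare them against what you pass to the pin; dumpCandidateJoints and handJointSuffix below are just placeholder names.)

import RealityKit

// Print every joint name on the skeleton and the entries that end with the
// suffix being pinned to; the pin only resolves if a name matches exactly.
func dumpCandidateJoints(for model: ModelEntity, suffix handJointSuffix: String) {
    for name in model.jointNames {
        print(name)
    }
    let candidates = model.jointNames.filter {
        $0.lowercased().hasSuffix(handJointSuffix.lowercased())
    }
    print("Candidates for \(handJointSuffix): \(candidates)")
}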

When I try to set the position of an equipped object directly, the chained matrix multiplication logic from pinEntity produces transforms that are oddly positioned with respect to the hand joint.

I noticed that while the result was off, it was only a transformation away from having the orientation match. I had to rotate the anchored entity about the z-axis to get the object onto the proper side of the model, and then its orientation matches the orientation of the hand. But even after rolling the transform of the object's anchor entity, it is always some offset away from where it should be that I can't account for.

It's close-ish, but it's not quite right, and I'm not able to retrieve any offset information from the failed GeometricPin.
