Asghar · 3mo ago

Meta's Movement SDK with Normcore

Can somebody help with setting up full-body networked custom avatars using the Movement SDK and Normcore? The Movement SDK works locally, but I can't get it to work with Normcore. The only guidance I found is this paragraph on the Movement SDK's GitHub:

Hello, we can provide some advice for setting up Movement SDK for multiplayer. We suggest networking the final pose after retargeting and constraints, and optimizing the serialized pose before sending it. Additionally, consider implementing interpolation when applying the received pose to the receiving client. You can remove all other components except for the one responsible for deserializing the data on the player's side and simply update the character with the received pose. The pose data should be compressed and sent at a reasonable update rate, using networking optimization techniques. An article that explains some techniques can be found here: https://gafferongames.com/post/snapshot_compression/. For information about how to use Photon Fusion 2, please refer to their tutorials: https://doc.photonengine.com/fusion/current/tutorials/host-mode-basics/overview. To summarize, the host performs all logic to make the character move, while the client has only the skinned mesh that is driven by the result of the host's work.

But I don't know how to get the final pose data or how to send it through Normcore (I guess via custom RealtimeComponent subclasses).
9 Replies
kapa.ai · 3mo ago
Hi @Asghamer. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:
----------------------
To set up full-body networked custom avatars with the Movement SDK and Normcore, you would need to create a custom RealtimeComponent subclass to synchronize the final pose data. However, the specific details of how to get the final pose data from the Movement SDK are not provided in the knowledge sources.

Here's a general guide to setting up networked Meta Avatars with Normcore:

1. Install the Normcore Meta XR package. Make sure you've installed Normcore and have configured all of Meta's platform prerequisites. Once Normcore is set up in your project, you'll be able to install the Normcore Meta XR UPM package. source (https://normcore.io/documentation/guides/meta/meta-avatars-setup)
2. Create a Meta Avatar prefab from scratch. Create an empty Game Object in the scene with a MetaAvatar component on it. MetaAvatar includes a button that will create all of the components needed by the Meta Avatar SDK. source (https://normcore.io/documentation/guides/meta/meta-avatars-customization)
3. Set up Meta Avatars. Import the "Meta Avatars" sample that comes with the Normcore Meta XR UPM package. Add the "AvatarSDKManagerMeta" prefab to the scene. Add your App Key to the Realtime component and hit Play. source (https://normcore.io/documentation/guides/meta/meta-avatars-setup#set-up-meta-avatars)

However, please note that Normcore uses metadata on the prefab to connect RealtimeComponents on all clients, so you cannot add RealtimeComponents at runtime. source (https://normcore.io/documentation/realtime/common-questions#can-i-add-realtimecomponents-at-runtime)

For more advanced configuration and usage, you can check out the official Meta Avatar SDK documentation. source (https://normcore.io/documentation/guides/meta/meta-avatars-customization)

If you're still encountering issues after checking these points, I recommend reaching out to the Normcore community on their Discord for further assistance. source (https://normcore.io/documentation/essentials/common-questions#does-normcore-support-x-y-z)

Please note that the specific details on how to get the final pose data from the Movement SDK and how to send it through Normcore are not provided in the knowledge sources. You might need to refer to the Movement SDK's documentation or ask for help in their community.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
kapa.ai · 3mo ago
<@455911545025724418> kapa.ai is still learning and improving, please let us know how it did by reacting below
Asghar (OP) · 3mo ago
They are not Meta Avatars. I need to know how to get the final pose data, or whether there is another way to do it. @Normcore Assistant This is a reply from a Movement SDK developer:
Hello, like mentioned previously, only the host character should have logic to drive the character. The client character should not contain any logic other than applying the pose that it receives over the network, which means that components that drive locomotion (such as the PlayerController) shouldn't be present on the client character. The pose that should be sent from the host should be the pose after all updates to the skeleton are done. To get the final pose, take the position and rotation of all of the bone transforms in the skeleton (i.e. Hips, Spine, Chest, UpperChest etc.) after all modifications to the skeleton are done (end of late update).

The pose data should be compressed and sent at a reasonable update rate, using networking optimization techniques. An article that explains some techniques can be found here: https://gafferongames.com/post/snapshot_compression/.

To summarize, the host performs all logic to make the character move, while the client has only the skinned mesh that is driven by the result of the host's work.
Guide me through the compression, sending, receiving, and decompression parts of the process. P.S. I am using a custom character, not a Meta Avatar.
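For reference, the "end of LateUpdate" capture the developer describes could look something like the sketch below. It reads the position and rotation of every bone transform under a rig root after all skeleton modifications have run. The skeletonRoot field, the class name, and the use of DefaultExecutionOrder to run after the Movement SDK's own LateUpdate are assumptions, not part of either SDK:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: capture the final pose after retargeting and constraints.
// DefaultExecutionOrder pushes this LateUpdate after other scripts';
// Unity's Script Execution Order settings are an alternative.
[DefaultExecutionOrder(int.MaxValue)]
public class FinalPoseCapture : MonoBehaviour {
    public Transform skeletonRoot; // assumed: assign your rig's root in the Inspector
    private readonly List<Transform> _bones = new List<Transform>();

    private void Awake() {
        // Cache every transform under the root in a fixed order so the
        // sending and receiving clients agree on which index is which bone.
        foreach (Transform bone in skeletonRoot.GetComponentsInChildren<Transform>())
            _bones.Add(bone);
    }

    private void LateUpdate() {
        // At this point the skeleton has received all of its updates
        // for the frame; read out the final pose.
        for (int i = 0; i < _bones.Count; i++) {
            Vector3 position = _bones[i].localPosition;
            Quaternion rotation = _bones[i].localRotation;
            // ...write position/rotation into your networked model here
        }
    }
}
```

Using local position/rotation (rather than world space) keeps the values small and well-bounded, which also helps the compression step discussed below in the thread.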
kapa.ai · 3mo ago
Let me check the knowledge base for you. One moment! :books:
----------------------
To compress, send, receive, and decompress the pose data using Normcore, you would need to create a custom RealtimeComponent subclass to synchronize the pose data. Here's a general guide:

1. Create a RealtimeModel: Create a RealtimeModel that represents the pose data you want to synchronize. This could be the position and rotation of all the bone transforms in the skeleton. Each field in the model should be a primitive type that can be easily serialized and transmitted to other clients. source (https://normcore.io/documentation/room/realtimemodel)
[RealtimeModel]
public partial class PoseModel {
    [RealtimeProperty(1, true)] private Vector3 _hipPosition;
    [RealtimeProperty(2, true)] private Quaternion _hipRotation;
    // Add more properties for other bones...
}
2. Create a RealtimeComponent: Create a RealtimeComponent subclass that uses the RealtimeModel you created. This component will be responsible for synchronizing the pose data between the model and the game object. source (https://normcore.io/documentation/realtime/realtimecomponent)
public class PoseSync : RealtimeComponent<PoseModel> {
    private void LateUpdate() {
        if (realtimeView.isOwnedLocallySelf) {
            // Owner: write the final pose into the model after the skeleton updates
            model.hipPosition = /* Get hip position from the skeleton */;
            model.hipRotation = /* Get hip rotation from the skeleton */;
            // Update other bones...
        } else {
            // Remote client: apply the model's pose data to the skeleton
            /* Read model.hipPosition / model.hipRotation and set the bones */;
        }
    }

    // Called when the model is first created or replaced, not on every property change
    protected override void OnRealtimeModelReplaced(PoseModel previousModel, PoseModel currentModel) {
        if (currentModel != null) {
            /* Apply currentModel's pose data to the skeleton */;
        }
    }
}
3. Compress and Decompress Data: The pose data should be compressed before sending and decompressed after receiving to save bandwidth. The specific method of compression depends on your needs. You might want to look into quantization, which is a common method for compressing 3D pose data. The article at https://gafferongames.com/post/snapshot_compression/ explains some techniques for snapshot compression.

4. Send and Receive Data: Normcore automatically synchronizes the RealtimeModel across all clients. When you update the model in the RealtimeComponent, Normcore will automatically send the changes to all other clients. When a client receives the changes, Normcore will automatically update the model, and the RealtimeComponent will apply the changes to the game object. source (https://normcore.io/documentation/realtime/realtimecomponent)

Please note that this is a general guide and might need to be adjusted based on your specific needs and the details of the Movement SDK. If you encounter any issues, I recommend reaching out to the Normcore community on their Discord for further assistance. source (https://normcore.io/documentation/essentials/common-questions#does-normcore-support-x-y-z)
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
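As an illustration of the quantization technique referenced in step 3, here is a minimal sketch. It is not part of Normcore's API; the class name, bit widths, and value ranges are assumptions chosen for the example:

```csharp
using UnityEngine;

// Sketch: quantization helpers for pose compression (not Normcore API).
// A value in a known range can be stored in far fewer bits than a
// 32-bit float; rotations are commonly sent with the "smallest three"
// encoding described in the linked snapshot-compression article.
public static class PoseQuantization {
    // Map a float in [min, max] onto an unsigned integer of `bits` bits.
    public static ushort Quantize(float value, float min, float max, int bits) {
        float t = Mathf.Clamp01((value - min) / (max - min));
        return (ushort)Mathf.RoundToInt(t * ((1 << bits) - 1));
    }

    // Inverse mapping; the error is at most half a quantization step.
    public static float Dequantize(ushort q, float min, float max, int bits) {
        return min + (q / (float)((1 << bits) - 1)) * (max - min);
    }
}
```

With 12 bits over a ±2 m range, for example, each position component costs 12 bits instead of 32 with a worst-case error under a millimetre, and a quaternion can be sent as three quantized components plus a 2-bit index identifying the omitted largest component.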
Asghar (OP) · 3mo ago
@Normcore Assistant Does Normcore support compression and decompression?
kapa.ai · 3mo ago
Let me check the knowledge base for you. One moment! :books: ---------------------- Normcore is designed to be as bandwidth and CPU efficient as possible. However, the specific details on whether Normcore supports built-in data compression and decompression are not provided in the knowledge sources. source (https://normcore.io/documentation/room/common-questions#are-model-change-events-more-expensive-than-rpcs) For more advanced configuration and usage, you can check out the official Normcore documentation. source (https://normcore.io/documentation/essentials/what-is-normcore) If you're still encountering issues after checking these points, I recommend reaching out to the Normcore community on their Discord for further assistance. source (https://normcore.io/documentation/essentials/common-questions#does-normcore-support-x-y-z) ---------------------- Note: you can ask me a follow up question by @ mentioning me again :speech_balloon: ----------------------
Asghar (OP) · 3mo ago
Normcore wasn't able to handle all the bone data. Implemented it with Fusion 2 successfully.
