When is it safe to retrieve the AudioSource from a peer?
I need a reference to the AudioSource (the one that is created by the RealtimeAvatarVoice component). I was retrieving it on an onchange for one of the properties of my model, but it seems like it still doesn't exist by then. Is there an event for when it's created, or should I poll for it?
Hi @rogan2528. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:
----------------------
The knowledge sources do not explicitly mention a dedicated event for when the AudioSource is created by the RealtimeAvatarVoice component. However, they do clarify that RealtimeAvatarVoice will create an AudioSource at runtime if one is not already present, and that this happens when the component is owned by a remote client (i.e., when it needs to play back audio) Realtime XR Voice Chat.
Since there is no documented event for AudioSource creation, polling for the AudioSource reference after the RealtimeAvatarVoice component is initialized is a practical approach. You could, for example, check for the existence of the AudioSource in Update or a coroutine until it becomes available.
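For example, here is a minimal coroutine sketch of that polling, assuming RealtimeAvatarVoice creates its AudioSource on the same GameObject this script lives on (the class name is hypothetical; adjust the lookup if your AudioSource ends up elsewhere):

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of the polling approach described above.
public class AudioSourceWaiter : MonoBehaviour
{
    IEnumerator Start()
    {
        AudioSource source = GetComponent<AudioSource>();

        // Keep checking once per frame until RealtimeAvatarVoice has created it.
        while (source == null)
        {
            yield return null;
            source = GetComponent<AudioSource>();
        }

        // The AudioSource exists now; configure or cache it here.
        Debug.Log("AudioSource is ready: " + source, this);
    }
}
```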
If you want to configure the AudioSource in advance, you can add your own AudioSource component to the prefab in the editor; RealtimeAvatarVoice will detect and use it instead of creating a new one. This can help ensure the AudioSource exists as soon as the component is initialized Voice Chat Common Questions.
Summary:
- There is no built-in event for AudioSource creation in RealtimeAvatarVoice.
- Polling for the AudioSource is a valid approach.
- Alternatively, add an AudioSource to the prefab to guarantee its presence when needed.

If you need more precise control, you may consider forking or modifying the RealtimeAvatarVoice script, as the source code is included with Normcore Realtime XR Voice Chat.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@Normcore Assistant Is the process of getting lipsync for Meta Avatars working just a case of adding the OvrAvatarLipSyncContext to the avatar's head and setting SetLipSync on the avatar entity to that OvrAvatarLipSyncContext, or is there more we have to do to make sure the audio gets to the OvrAvatarLipSyncContext component?
Let me check the knowledge base for you again. One moment! :books:
----------------------
Based on the available knowledge sources, the process for enabling lipsync with Meta Avatars in Normcore involves using the MetaAvatarLipSync component, which is set up automatically when you use the "Set Up Meta Avatar" button on the MetaAvatar component. This setup references the RealtimeAvatarVoice component for voice data, not the OvrAvatarLipSyncContext directly. The documentation does not mention the need to manually add or configure the OvrAvatarLipSyncContext or call SetLipSync on the avatar entity.
When you use the provided workflow (adding the MetaAvatar component and running its setup), the system will:
- Add the MetaAvatarLipSync component and wire it to RealtimeAvatarVoice.
- Handle the connection between voice chat audio and the avatar's lipsync automatically.
If you follow the documented setup, you should not need to manually manage the OvrAvatarLipSyncContext or route audio yourself. The integration is designed to work out of the box with the Normcore Meta XR package and its prefabs Meta Avatars Customization.
If you are building a custom solution outside of the documented workflow, the knowledge sources do not provide enough detail about manual integration with OvrAvatarLipSyncContext or SetLipSync. For advanced or custom setups, you may need to consult the official Meta Avatar SDK documentation for further details.
Summary:
If you use the Normcore Meta XR workflow and prefabs, lipsync is handled for you and you do not need to manually set up OvrAvatarLipSyncContext or SetLipSync. The knowledge sources do not provide details for manual integration beyond this workflow Meta Avatars Customization.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
If it’s for RealtimeAvatarVoice, you can also just create it on your player prefab and RealtimeAvatarVoice will find it and use it
That's for the AudioSource? Thanks, I got that part working. Now I'm dealing with the fact that the Meta lipsync component isn't getting anything from the AudioSource, and I'm wondering if I have to override the RealtimeAvatarVoice component so I can call metaLipSyncContext.ProcessAudioSamples(_microphoneFrameData, 1), because it seems the audio data isn't making it to the Meta lipsync component.
Are you using Normal's Meta Avatars component? We do all of this for you and avoid having to fetch audio from the audio source directly.
Unfortunately it's a project I'm migrating from another networking architecture and I don't want to have to recreate all my meta avatar logic, so I'm trying to get it working with my existing architecture
What I’d recommend then is to create a fresh project, import our meta avatars component, and check out the source code. I’d look at how we implement lip sync support and copy just that piece into your main project
Good idea. Thanks!
it uses a simpler approach that gets the audio samples from RealtimeAvatarVoice directly and bypasses the need to fetch samples from the audio source
sure thing!
@Normcore Assistant The voiceData event of my RealtimeAvatarVoice component is never called, though I can hear the audio from the peer just fine and I can see it's going through the AudioSource component fine and I can see the meters changing with the audio. What could be wrong in my setup?
Let me check the knowledge base for you again. One moment! :books:
----------------------
I'm sorry, but the knowledge sources do not contain specific information about the voiceData event of the RealtimeAvatarVoice component, its expected behavior, or troubleshooting steps if it is not being called. The documentation does confirm that RealtimeAvatarVoice handles audio streaming and playback, and that you can access the audio via the AudioSource component, but it does not mention the voiceData event or provide details about its usage or requirements Realtime XR Voice Chat.
If you are hearing audio and seeing the meters respond, it suggests that the audio pipeline is functioning correctly, but without more information in the documentation, I cannot provide a definitive answer as to why the voiceData event is not firing.
For further assistance, you may want to reach out to the Normcore community or support channels.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@Max I copied over the MetaAvatarLipSync component from the Normcore Meta XR library, and it looks straightforward, but it relies on the voiceData event being called, which is not called for me.
Are you on the latest version of Normcore? And if you've forked RealtimeAvatarVoice, make sure to pull in the latest updates.
Also, wait, I think maybe it uses audioData for the local player and another API for remote players? audioData only fires for the local player's microphone data.
I see. I didn't see any other component to use for remote avatars. I'll have to dig deeper. Thanks
One other question: in your project, do you synchronize the Meta Avatar raw buffers, or do you only sync head/hands and then, on each client, update the Meta Avatars to match your head/hand positions?
We do the former, and it's very possible that raw buffer includes lip sync as well.
I see. I don't sync the raw buffers
We've found Meta syncs tons of stuff through it: head and hand positions, avatar customizations, finger placement, etc.
That would explain it
ok here's what I think you'll want to do then
Manually add Normcore's AudioOutput component next to RealtimeAvatarVoice. Then add a custom MonoBehaviour after it that uses OnAudioFilterRead to get the audio data out.
It's important that it comes afterward, otherwise you won't get any of the voice samples.
RealtimeAvatarVoice will detect the AudioOutput instance you've added manually and use that one instead of creating a new one.
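A minimal sketch of what that custom MonoBehaviour could look like; VoiceSampleTap and its event are made-up names, and how the samples are handed to lip sync is left to your setup:

```csharp
using System;
using UnityEngine;

// Sits on the same GameObject as Normcore's AudioOutput and RealtimeAvatarVoice,
// placed AFTER AudioOutput in the component order so OnAudioFilterRead receives
// the decoded voice samples rather than silence.
public class VoiceSampleTap : MonoBehaviour
{
    // Raised from Unity's audio thread with the interleaved samples and the
    // channel count. The buffer is only valid during the callback, so
    // subscribers should consume it immediately or copy it.
    public event Action<float[], int> samplesAvailable;

    void OnAudioFilterRead(float[] data, int channels)
    {
        // Leave data untouched so voice playback continues normally.
        samplesAvailable?.Invoke(data, channels);
    }
}
```

A subscriber could then feed those samples to the lip sync component, for example with the ProcessAudioSamples(samples, 1)-style call quoted earlier in the thread, downmixing the interleaved buffer to mono first if needed.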
but definitely make sure you're on the latest Normcore as we just added that functionality very recently
Awesome, thanks so much Max
sure thing!
That worked. Thanks so much!