How can I distort the mic audio in RealtimeAvatarVoice to be higher or lower pitch?
Is it possible to distort the mic audio from RealtimeAvatarVoice so the player is heard with a squeaky high pitched voice or a deep low pitched voice?
19 Replies
Hi @dontworry. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:
----------------------
Hmm, I don't know enough to give you a confident answer yet. The documentation provided does not mention the ability to modify the pitch of the audio in RealtimeAvatarVoice directly. However, it does mention that RealtimeAvatarVoice uses Unity's audio engine for playback, and Unity does have the ability to modify audio pitch.
You might be able to achieve this by adding an audio effect to the AudioSource component that RealtimeAvatarVoice uses for playback. Unity's audio engine supports a variety of audio effects, including pitch shifting.
Please refer to the Unity documentation (https://docs.unity3d.com/Manual/AudioEffects.html) for more information on how to use audio effects.
For more specific information on how to modify the RealtimeAvatarVoice component, you may want to refer to the NormalVR documentation (https://normcore.io/documentation/realtime/xr/voicechat#audiopreprocessor).
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
<@666359106176679966> kapa.ai is still learning and improving, please let us know how it did by reacting below
@Normcore Assistant the unity documentation link to audio effects goes to a blank page, it might be outdated
Let me check the knowledge base for you. One moment! :books:
----------------------
I apologize for the inconvenience. It seems the link provided is outdated or incorrect. Unfortunately, the provided knowledge sources do not contain a valid link to the Unity documentation for audio effects.
For accurate and up-to-date information, I recommend visiting the official Unity documentation website and searching for "Audio Effects". This should provide you with the necessary resources to understand how to apply and manipulate audio effects in Unity, including pitch shifting.
For more specific information on how to modify the RealtimeAvatarVoice component, you may want to refer to the NormalVR documentation (https://normcore.io/documentation/realtime/xr/voicechat#audiopreprocessor).
----------------------
@Normcore Assistant where can I find the AudioSource component that RealtimeAvatarVoice uses for playback? I don't see any AudioSource in the script
----------------------
The AudioSource component that RealtimeAvatarVoice uses for playback is not directly visible in the script. However, according to the NormalVR documentation (https://normcore.io/documentation/realtime/common-questions#how-do-i-spatialize-voice-chat), RealtimeAvatarVoice will create an AudioSource if one is not present.
If you want to configure the AudioSource, you can add your own AudioSource component to the same GameObject where RealtimeAvatarVoice is attached. RealtimeAvatarVoice will detect and use your AudioSource instance instead of creating a new one.
Here's how you can do it:
1. Select the GameObject with the RealtimeAvatarVoice component in the Unity editor.
2. Click on "Add Component" in the Inspector.
3. Search for "Audio Source" and add it to the GameObject.
Now, RealtimeAvatarVoice will use this AudioSource for audio playback, and you can configure it as needed.
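If you'd rather set up the AudioSource from code than the Inspector, a minimal sketch might look like the following (the component name and the specific settings here are illustrative assumptions, not part of Normcore's API):

```csharp
// Sketch (untested): ensure an AudioSource exists on the same GameObject as
// RealtimeAvatarVoice so it uses this instance instead of creating its own.
using UnityEngine;
using Normal.Realtime;

[RequireComponent(typeof(RealtimeAvatarVoice))]
public class VoiceAudioSourceSetup : MonoBehaviour {
    private void Awake() {
        // Add the AudioSource early, before RealtimeAvatarVoice looks for one.
        AudioSource audioSource = GetComponent<AudioSource>();
        if (audioSource == null)
            audioSource = gameObject.AddComponent<AudioSource>();

        // Illustrative settings — adjust for your project.
        audioSource.spatialBlend = 1.0f; // fully 3D voice
        audioSource.rolloffMode  = AudioRolloffMode.Logarithmic;
    }
}
```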
----------------------
add an audio source to the avatar prefab and apply whatever effect you want to that
@kapa.ai @Mechabit
I added the Audio Source to the avatar prefab /Head gameobject that has the RealtimeAvatarVoice component.
In runtime I have 2 avatars created in the scene and they both use the same avatar prefab.
The 1st one keeps the Pitch at 3 that I set in the inspector, but AudioClip has "None" in its field.
The 2nd one resets the Pitch to 1, and "Normcore Audio Stream" is in the AudioClip field.
So it seems like the 1st one is keeping my Audio Source settings but not getting used by RealtimeAvatarVoice, while the 2nd one is getting used by RealtimeAvatarVoice but ignoring the Audio Source settings?
----------------------
The behavior you're observing might be due to how RealtimeAvatarVoice manages the AudioSource component. According to the NormalVR documentation (https://normcore.io/documentation/realtime/xr/voicechat), RealtimeAvatarVoice creates an AudioSource component at runtime if one is not present. However, if you add your own AudioSource component, RealtimeAvatarVoice should detect and use that instance instead.
The first avatar seems to be using the AudioSource you added, but since it's not owned by a remote client, it's not receiving any audio to play back, hence the AudioClip field is "None".
The second avatar, which is owned by a remote client, is creating an AudioOutput component to play back audio. It seems like it's creating a new AudioSource, which is why the pitch is reset to 1 and the AudioClip is set to "Normcore Audio Stream".
To ensure that RealtimeAvatarVoice uses your AudioSource with the desired settings, you might need to modify the RealtimeAvatarVoice script. The NormalVR documentation (https://normcore.io/documentation/realtime/xr/avatars#modifying-realtimeavatar-scripts) suggests copying the RealtimeAvatar classes into your project under a new namespace if you want to modify them.
Please note that modifying the RealtimeAvatarVoice script should be done with caution and a good understanding of the Normcore framework and Unity's audio system.
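As a quick way to verify which AudioSource each avatar actually ends up using at runtime, a throwaway debug component like this (hypothetical, just for diagnosis) can log the clip and pitch once the voice stream is assigned:

```csharp
// Sketch (untested): log each avatar's AudioSource state to check whether
// RealtimeAvatarVoice picked up the pre-added instance or replaced its settings.
using UnityEngine;

public class VoiceAudioSourceDebug : MonoBehaviour {
    private void Update() {
        AudioSource source = GetComponent<AudioSource>();
        if (source != null && source.clip != null) {
            Debug.Log($"{name}: clip = {source.clip.name}, pitch = {source.pitch}");
            enabled = false; // log once, after the stream clip is assigned
        }
    }
}
```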
----------------------
We've added voice effects like this recently and it's working well. We have an AudioSource attached to the same object as the voice script so that it uses the existing AudioSource instead of creating one.
For modifying the voice we use AudioMixer effects, though. If you just increase/decrease the pitch of an AudioSource, it will play the audio clip back faster/slower, which is not ideal when voice data is streaming in at a fixed rate. Theoretically, with a higher pitch you'd finish playing back a chunk of voice data before the next chunk arrives, causing gaps in the playback.
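A sketch of the AudioMixer approach, assuming you've created a mixer asset with Unity's Pitch Shifter effect on a group and exposed its Pitch parameter ("Voice" and "VoicePitch" are hypothetical names you'd set up in your own mixer):

```csharp
// Sketch (untested): route the voice AudioSource into an AudioMixer group that
// has a Pitch Shifter effect, and drive its exposed pitch parameter. Unlike
// AudioSource.pitch, this shifts pitch without changing playback speed.
using UnityEngine;
using UnityEngine.Audio;

public class VoicePitchEffect : MonoBehaviour {
    [SerializeField] private AudioMixer voiceMixer; // assign in the Inspector

    private void Start() {
        AudioSource source = GetComponent<AudioSource>();
        // "Voice" is a hypothetical group name in your mixer asset.
        AudioMixerGroup[] groups = voiceMixer.FindMatchingGroups("Voice");
        if (source != null && groups.Length > 0)
            source.outputAudioMixerGroup = groups[0];
    }

    // "VoicePitch" must be exposed on the Pitch Shifter effect in the mixer.
    // e.g. 2f for a squeaky voice, 0.5f for a deep voice.
    public void SetVoicePitch(float pitch) {
        voiceMixer.SetFloat("VoicePitch", pitch);
    }
}
```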
If by "resets" you mean it doesn't sync to other players, that's because you have to create custom data sync models if you want to sync anything other than the transform in Normcore.
@liamc Ah great point on pitch affecting speed. I'm fiddling around but what effects/settings did you use in the AudioMixer to raise/lower the pitch while keeping the fixed speed? Thanks
I'll have a look when I'm back at work tomorrow and let you know (it's midnight here)
These are the effects we use, but not necessarily all of them at the same time. We have certain items that can change your voice in a certain way, so the effect parameters are synced via normcore and the group is updated.
We have a limit of 4 players per room and the voice effects affect an individual player temporarily, and effects can stack. So there's a separate mixer group per player position, so that one player's effects don't apply to all players.
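The per-player routing described above could be sketched like this (assuming mixer groups named "Player0" through "Player3" exist in your mixer asset; the names and lookup are illustrative):

```csharp
// Sketch (untested): give each avatar's voice AudioSource its own mixer group
// so effects applied to one player's group don't affect the others.
using UnityEngine;
using UnityEngine.Audio;

public class PerPlayerVoiceGroup : MonoBehaviour {
    [SerializeField] private AudioMixer voiceMixer; // assign in the Inspector

    // Call with the player's position in the room, e.g. 0-3 for a 4-player room.
    public void AssignGroup(int playerIndex) {
        AudioSource source = GetComponent<AudioSource>();
        AudioMixerGroup[] groups = voiceMixer.FindMatchingGroups($"Player{playerIndex}");
        if (source != null && groups.Length > 0)
            source.outputAudioMixerGroup = groups[0];
    }
}
```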
Thanks for the help! Much appreciated
No problem!