I'm trying to use Normcore to send audio source data over multiplayer.
I specifically don't want live microphone data. I've recorded clips using the Microphone API and I'm playing them back, and I'd like that playback to be synced. Is there any way for me to create an audio stream for each audio source I create?
Hi @Gregory Osborne. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:
----------------------
Yes, Normcore provides a way to create audio streams for each audio source you create. You can use the CreateAudioInputStream() API to create an AudioInputStream that can be used to send audio data to all clients. Once called, all clients will be able to retrieve a corresponding AudioOutputStream object using the GetAudioOutputStream() API along with the clientID and streamID from the AudioInputStream object.
Here is the relevant excerpt from the Normcore documentation:
Normcore provides high-quality, low-latency audio streams via CreateAudioInputStream() (https://normcore.io/documentation/reference/classes/Normal.Realtime.Room). Calling this API returns an AudioInputStream that can be used to send audio data to all clients. Once called, all clients will be able to retrieve a corresponding AudioOutputStream object. To do so, they should use GetAudioOutputStream() (https://normcore.io/documentation/reference/classes/Normal.Realtime.Room) along with the clientID and streamID from the AudioInputStream object.
Please note that Normcore's new MediaStream API is now available in private preview for Normcore Private (https://normcore.io/normcore-private) customers. If you would like to use video streaming in your Normcore Private app, get in touch with your account manager for more information. You can find more details in the Normcore documentation (https://normcore.io/documentation/room/room#media-streams).
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
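In practice, a minimal sketch of the two sides might look like the following. The Realtime/Room references and the exact GetAudioOutputStream(clientID, streamID) overload are assumptions based on the excerpt above; check the Room reference page for the precise signatures.
```cs
using Normal.Realtime;
using UnityEngine;

public class ClipStreamExample : MonoBehaviour
{
    public Realtime realtime;

    private AudioInputStream  _inputStream;   // sender side
    private AudioOutputStream _outputStream;  // receiver side

    // Sender: create a stream to push clip audio to all clients.
    // Parameters mirror the call used later in this thread: (voice: false, 48000 Hz, 1 channel).
    public void CreateSendStream()
    {
        _inputStream = realtime.room.CreateAudioInputStream(false, 48000, 1);
    }

    // Receiver: look up the matching output stream once you know the sender's clientID and
    // streamID (shared however you like, e.g. via a synced RealtimeModel -- not shown here).
    public void ConnectToSendStream(int clientID, int streamID)
    {
        _outputStream = realtime.room.GetAudioOutputStream(clientID, streamID);
    }
}
```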
If you want playback to be sync’d and the audio is pre-recorded, it may be easier to sync the file with something like S3 and then use the Timer sample to synchronize the playback time
Yeah, I currently have a metronome system that looks like it's synced between users; it uses the room time to define a global downbeat. I'm just at the stage where, once I've recorded an audio clip (and it's been aligned to my metronome), I'd like to be able to play that data through the server. Taking a look at Kapa's suggestion now.
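For the scheduling half of that, a rough sketch of lining a clip up against a shared room-time downbeat might look like this. It assumes realtime.room.time is the server-synced clock already driving the metronome; PlayScheduled and AudioSettings.dspTime are standard Unity.
```cs
using Normal.Realtime;
using UnityEngine;

public class DownbeatPlayback : MonoBehaviour
{
    public Realtime realtime;
    public AudioSource source;

    // Schedule a pre-recorded clip to start at an agreed room time (the "downbeat").
    public void PlayClipAtRoomTime(double downbeatRoomTime)
    {
        double secondsUntilDownbeat = downbeatRoomTime - realtime.room.time;
        if (secondsUntilDownbeat < 0.0) secondsUntilDownbeat = 0.0;

        // Convert the shared delay into the local DSP clock for sample-accurate scheduling.
        source.PlayScheduled(AudioSettings.dspTime + secondsUntilDownbeat);
    }
}
```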
The documentation isn't exactly clear on what the "GetAudioOutputStream()" function does. Will it automatically start playing audio from an audio source it creates, or do I have to take the output of that function and feed it into some audio source somewhere for it to play? How do I use the OutputStream for playing back audio?
I'd take a look at how RealtimeAvatarVoice and AudioOutput use the API. That's your best bet.
That said, it's worth noting that the Normcore audio stream system is going to apply its own jitter buffering to ensure smooth playback. However, this does mean that if the audio is sent by a host at a specific timestamp, it may not play back on the receiving end in sync. Playback position will be determined by the quality of the receiver's internet connection.
would you say a better solution would be to send the audio clip data as a synced array of floats and rebuild the clip on the other end with all the necessary information?
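(For reference, the "rebuild the clip on the other end" half of that is just standard Unity. Something like the sketch below, assuming the float array, sample rate, and channel count have already arrived by whatever synced mechanism you choose.)
```cs
using UnityEngine;

// Sketch: rebuild an AudioClip on the receiving end from a raw float array.
// How the samples, sample rate, and channel count get to this client is up to you.
public static class ClipRebuilder
{
    public static AudioClip FromSamples(float[] samples, int channels, int sampleRate)
    {
        var clip = AudioClip.Create("ReceivedClip", samples.Length / channels, channels, sampleRate, false);
        clip.SetData(samples, 0);
        return clip;
    }
}
```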
When I call
AudioInputStream audioInputStream = realtime.room.CreateAudioInputStream(false, 48000, 1);
It seems that my streamID is set to -1. What could be the reason for this? I notice in the RealtimeAvatarVoice script you check to make sure the stream ID isn't less than 0. Am I calling the CreateAudioInputStream function wrong? What does an invalid stream ID even mean?
I would upload it as an mp3/opus file to S3. Raw float arrays are going to be /massive/
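If you do go the file route, downloading a shared clip by URL is also standard Unity, roughly like the sketch below. The URL is a placeholder, and AudioType should match whatever format you actually upload.
```cs
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class SharedClipDownloader : MonoBehaviour
{
    // Sketch: download a compressed clip (e.g. an mp3 uploaded to S3) and play it locally.
    public IEnumerator DownloadAndPlay(string url, AudioSource source)
    {
        using (var request = UnityWebRequestMultimedia.GetAudioClip(url, AudioType.MPEG))
        {
            yield return request.SendWebRequest();

            if (request.result != UnityWebRequest.Result.Success)
            {
                Debug.LogError(request.error);
                yield break;
            }

            source.clip = DownloadHandlerAudioClip.GetContent(request);
            source.Play();
        }
    }
}
```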
I believe you have to wait for it to be assigned by the server
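Something like this would cover the wait before sending (a sketch; the streamID property name and the "less than 0 means unassigned" convention are taken from this thread, not verified against the reference docs):
```cs
using System.Collections;
using Normal.Realtime;
using UnityEngine;

public class StreamIDWaiter : MonoBehaviour
{
    // Sketch: don't start sending until the server has assigned a valid stream ID.
    public IEnumerator WaitForStreamID(AudioInputStream inputStream, System.Action onReady)
    {
        while (inputStream.streamID < 0)
            yield return null;

        onReady();
    }
}
```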
I have finally figured out the input and output stream stuff; my audio clips are in fact sending data through those streams. However, the audio as it's sent gets very glitchy. I can tell that it's sending my audio because the volume of the glitchy noise increases in time with the audio as it plays elsewhere. My main question is about audioInputStream.SendRawAudioData(inputStreamFrameData); and audioInputStream.SendQueuedMessages();
Currently I have a function that's called in Update(). This function uses audioSource.GetOutputData(inputStreamFrameData, 1); then, if (inputStreamFrameData != null), I call audioInputStream.SendRawAudioData(inputStreamFrameData); and after this I also call audioInputStream.SendQueuedMessages();
I'm not sure if I should be doing this in a different place or a different way. I'm not entirely sure what the difference is between SendRawAudioData and SendQueuedMessages or how they're supposed to work.
You need to ensure you're sending audio at a constant rate for playback. The stream is going to use the rate at which packets arrive to determine how much audio to buffer.
The first one compresses and queues audio to go out; the second one combines the queued frames into a single packet and sends it.
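Putting those two points together, here is a rough sketch of feeding the stream at a constant rate from the clip's raw samples instead of sampling the AudioSource in Update(). The 10 ms frame size, the pacing against AudioSettings.dspTime, and the streamID check are assumptions, not documented requirements; it also assumes a mono clip to match the 1-channel stream.
```cs
using Normal.Realtime;
using UnityEngine;

public class ClipStreamFeeder : MonoBehaviour
{
    public AudioInputStream inputStream;  // created via room.CreateAudioInputStream(false, 48000, 1)
    public AudioClip clip;

    private const int SampleRate = 48000;
    private const int FrameSize  = 480;   // 10 ms frames -- an assumption, not a documented requirement

    private float[] _samples;
    private int     _position;
    private double  _startDspTime;

    public void Begin()
    {
        // Pull the clip's raw samples once up front instead of sampling the AudioSource every frame.
        _samples = new float[clip.samples * clip.channels];
        clip.GetData(_samples, 0);
        _position     = 0;
        _startDspTime = AudioSettings.dspTime;
    }

    void Update()
    {
        if (_samples == null || inputStream == null || inputStream.streamID < 0)
            return;

        // How many samples should have gone out by now to keep a constant send rate?
        double targetSamples = (AudioSettings.dspTime - _startDspTime) * SampleRate;

        while (_position + FrameSize <= _samples.Length && _position + FrameSize <= targetSamples)
        {
            var frame = new float[FrameSize];
            System.Array.Copy(_samples, _position, frame, 0, FrameSize);

            inputStream.SendRawAudioData(frame); // compresses and queues audio (per the reply above)
            _position += FrameSize;
        }

        inputStream.SendQueuedMessages();        // combines queued frames into one packet and sends it
    }
}
```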