Re: [music] concert played live from different locations

Thanks for listening!

On 28.03.21 11:28, Fons Adriaensen wrote:
> Hello Giso,
>  
>> tonight (March 26th 2021, 7pm CET) we will play a concert with five
>> musicians at four different locations, using our ovbox system.
> 
> I really enjoyed the concert very much, many thanks!
> 
> Reading the ovbox documentation on GitHub, it's not clear to me how
> it really works - how the audio is routed. You mention both STUN and
> TURN servers, which suggests different topologies.

Actually, we can mix both topologies, because STUN does not work with
all NAT routers (we didn't manage to get it working with mobile
networks). Some clients can use peer-to-peer mode via a STUN server,
while others join via a TURN server. There is a sketch in my LAC'20
presentation:
http://lac2020.orlandoviols.com/#c12
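
To illustrate the idea (a minimal sketch under my own assumptions, not
the actual ovbox code; all names and the hole-punching test are
hypothetical), each pair of peers could try the direct, STUN-assisted
path first and fall back to relaying via the server:

  from dataclasses import dataclass

  @dataclass
  class Peer:
      name: str
      p2p_ok: bool        # did STUN-assisted UDP hole punching succeed?
      public_addr: str    # address learned via the STUN server

  def choose_route(peer: Peer, relay_addr: str) -> str:
      # Prefer the low-latency direct path; fall back to the relay,
      # which is what happens e.g. on mobile networks.
      if peer.p2p_ok:
          return f"{peer.name}: direct to {peer.public_addr}"
      return f"{peer.name}: relayed via {relay_addr}"

  peers = [Peer("violin", True, "198.51.100.7:4464"),
           Peer("cello", False, "-")]   # cello sits behind a mobile NAT
  for p in peers:
      print(choose_route(p, "relay.example.org:4464"))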

I have to admit that I am not a network expert; I just implemented the
minimal steps needed to make it work.

> 
> In my days in broadcasting (long ago), a multi-location concert
> would be done as follows:
> 
> - Each participant sends his contribution to a central studio.
> - The studio provides a separate mix for each location (usually
>   just an N-1 of the master mix) and sends this back.
> 
> So this requires a bidirectional link between each participant and
> the studio, and there are no direct connections between the different
> locations.
> 
> The other way would be to have direct audio streams between all
> participants, and each makes his own mix. This provides less latency
> but doesn't scale linearly. Since you provide head-tracking, I suspect
> this is how ovbox works?

Yes, we always make the mix locally (even when the audio is routed via
the server). This has a heavy impact on the bandwidth requirements with
many participants, so the ovbox works only with relatively small
ensembles. The largest number of participants we have successfully
tested was 12, using a Raspberry Pi.
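
As a rough back-of-the-envelope check (illustrative numbers of mine,
not measured ovbox figures), the download of each client grows linearly
with the ensemble size, since it receives one stream per remote peer:

  # One uncompressed mono PCM stream per remote peer; packet overhead
  # is ignored here, so real traffic is somewhat higher.
  def client_download_kbps(n_peers, fs=48000, bits=16, channels=1):
      stream_kbps = fs * bits * channels / 1000
      return (n_peers - 1) * stream_kbps

  for n in (3, 6, 12):
      print(f"{n} peers: ~{client_download_kbps(n):.0f} kbit/s per client")
  # 12 peers -> ~8448 kbit/s down (and the same up in full mesh mode),
  # which is why only relatively small ensembles are practical.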

For the concert stream I used a more powerful desktop PC to create the
mix. Since under the hood we use zita-njbridge (with the network data
repackaged for distribution to the other peers and/or the server), there
is one zita-n2j instance for each remote musician, and their outputs are
then fed into the local mix for streaming.
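
As a simplified sketch (not the actual ovbox launcher; the peer names,
addresses and ports are made up, and the zita-n2j options should be
checked against the zita-njbridge manual), the streaming machine could
spawn one receiver per remote musician like this:

  import subprocess

  # Hypothetical remote peers: JACK client name -> (listen address, port).
  remote_musicians = {
      "violin": ("127.0.0.1", 4464),
      "cello":  ("127.0.0.1", 4465),
  }

  # One zita-n2j process per remote musician; each shows up as its own
  # JACK client, whose ports are then connected into the local mix.
  procs = [subprocess.Popen(["zita-n2j", "--jname", name, addr, str(port)])
           for name, (addr, port) in remote_musicians.items()]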

Regarding head tracking, we also have two modes: the head tracking can
affect not only the way we hear the sound, but also, with directional
sound sources, the ratio between direct and reverberant sound, which
could be an interesting effect for singers (right now it is used more as
a toy). In the latter case we send the orientation data to all clients
so that they change the rendering accordingly.
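
As a toy illustration of that second mode (my own simplification, not
the ovbox renderer), a cardioid-like source directivity would attenuate
the direct sound as the tracked head turns away, while the diffuse
reverb stays constant, so the direct-to-reverberant ratio follows the
orientation:

  import math

  def direct_gain(orientation_deg, toward_listener_deg=0.0):
      # Cardioid-like directivity: 1.0 facing the listener, 0.0 turned away.
      angle = math.radians(orientation_deg - toward_listener_deg)
      return 0.5 * (1.0 + math.cos(angle))

  reverb_gain = 0.3   # diffuse field, independent of head orientation
  for deg in (0, 90, 180):
      d = direct_gain(deg)
      print(f"{deg:3d} deg: direct {d:.2f}, D/R ratio {d / reverb_gain:.2f}")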

Best,

Giso




