Hi!
Not sure if I'm in the right place, but I guess the LAU people are
trained to find solutions to extraordinary problems…
I have a vision 8-) :
I'm sitting at FOH, driving a theater show. I have - let's say - 3
projectors available: one at my back covering the stage from the front,
two behind the stage doing rear projection on the left and the right.
On every projector there is a Raspberry Pi connected via HDMI, waiting
to feed videos to the projector. All Raspberries are connected to the
LAN, just like my Linux laptop, from which the show is controlled.
I know I can get something similar with QLC+: install the app on all
the computers involved, set up 3 different Art-Net channels, configure
one or more video functions and make each one accessible through a DMX
channel. For that, the videos to be presented have to live on the
Raspberries; I can copy them to the devices and configure the triggers
on each before the show.
But my goals are different: keep it simple, keep it fast (in terms of
latency, but also in terms of using light and fast apps, and finally in
terms of not running through the venue to make some last-minute
configurations), and let only one machine be the one that has to be
configured - the main laptop at FOH.
I'm not so far away from that - the tools and the technology seem to be
there already. With ffmpeg, for example, it's possible to stream videos
from point to point in realtime.
[code]ffmpeg -i [input-video] -f [output format to use]
udp://[receiver's network address]:[port][/code]
(There are options to speed things up and/or relieve the CPU, but take
it as an easy example.) On the other side of the chain, ffplay or mpv
can catch the stream and decode it in no time.
[code]mpv udp://[transmitter's network address]:[port][/code]
(Again: Optimizations left aside)
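To give a rough idea, a lower-latency variant on my setup could look
something like this - the x264 settings, the MPEG-TS container and the
addresses/ports are just assumptions from my experiments, not a tuned
recipe:
[code]# sender (FOH laptop): encode with x264 tuned for low latency, send as MPEG-TS over UDP
ffmpeg -re -i show.mp4 -c:v libx264 -preset ultrafast -tune zerolatency \
       -f mpegts udp://192.168.1.21:5000

# receiver (Raspberry Pi): listen on port 5000, play fullscreen with mpv's low-latency profile
mpv --fs --profile=low-latency udp://0.0.0.0:5000[/code]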
Tried this myself in a LAN between a Ryzen 5 2400G desktop and a
10-year-old Thinkpad and achieved latencies under 1 s - which is good
enough, even for professional use. Once you have found the best options
for your setup, you can use them over and over again with different
video inputs and destinations. Best of all: being a command line, it can
be integrated into QLC+ or Linux Show Player (LiSP). And: with ffmpeg I
can split the video from the audio stream, if I like, and keep the audio
at the FOH. (Or send it back from one of the Raspberries to FOH via
net-jack or comparable. Keeping video and audio in sync will be another
challenge, I see…)
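To sketch what I mean: as far as I understand, LiSP can fire arbitrary
shell commands from a cue, so a tiny wrapper script like this (name,
address and options are made up) could be triggered per cue, with -an
keeping the audio out of the network stream so it stays at FOH:
[code]#!/bin/sh
# play_front.sh <videofile> - hypothetical wrapper, fired from a cue in LiSP/QLC+
# -an drops the audio, so only the video goes to the front-projection Pi
ffmpeg -re -i "$1" -an -c:v libx264 -preset ultrafast -tune zerolatency \
       -f mpegts udp://192.168.1.21:5000[/code]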
But there is one downside: if the receiver is already playing the
video, there is no big latency between sender and receiver (if the
options are chosen well, of course). But catching the stream in the
first place can take several seconds. So what I need is a continuous
stream onto which I can put my videos. OBS can do this, but it's another
resource-intensive app and - as far as I know - I cannot send commands
to it from QLC+ or LiSP. (I want ONE cue player for everything, you
know…!) Also: I *guess* OBS can't handle more than one stream at once
(sending to the different RPi receivers) - but with ffmpeg commands it's
easy…!
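(Easy in the sense that I can simply run one ffmpeg process per
destination; and if both rear projectors should show the same material,
ffmpeg's tee muxer can even duplicate a single encode to several UDP
targets - addresses made up again:)
[code]ffmpeg -re -i rear.mp4 -an -c:v libx264 -preset ultrafast -tune zerolatency \
       -map 0:v -f tee "[f=mpegts]udp://192.168.1.22:5000|[f=mpegts]udp://192.168.1.23:5000"[/code]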
I had the idea of sending a continuous stream by streamcasting a
virtual desktop and configuring mpv on the receiver to play it
fullscreen on demand. But I guess this gets unwieldy with more than one
beamer.
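Just to make that idea concrete, grabbing a (virtual) desktop with
ffmpeg's x11grab input would look roughly like this (display, resolution
and address are placeholders) - and one such grab plus one virtual
desktop per beamer is probably where it stops being handy:
[code]ffmpeg -f x11grab -framerate 25 -video_size 1920x1080 -i :0.0 \
       -c:v libx264 -preset ultrafast -tune zerolatency \
       -f mpegts udp://192.168.1.21:5000[/code]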
Any ideas on how to reach my goals? (You can suggest other apps than
ffmpeg or mpv, of course!)
(Disclaimer: I have also posted this to the Linux Audio Users mailing
list and will try to send it to a place where ffmpeg nerds are common. I
will let you know if I get good thoughts from the other sources…)
Greets!
Mitsch