Re: [RFC PATCH spice-server] Send real time to client

On 3 Oct 2017, at 15:49, Frediano Ziglio <fziglio@xxxxxxxxxx> wrote:


On 2 Oct 2017, at 15:43, Frediano Ziglio <fziglio@xxxxxxxxxx> wrote:

A related problem with audio.

Look at this video:
https://www.youtube.com/watch?v=bn5iczqLBmk
Basically, while playing a video with audio, the network is disconnected for a
short while. This creates a delay (see the delay when moving the mouse; I had
two cursors to show the delay).
The server cursor follows with a huge delay, which disappears when audio is
disabled.
This is caused by several factors:
- communication uses TCP, so packets are retransmitted, not discarded;
- audio packets are never discarded;

That part seems the easiest to address, no?


The current design is "we don't care about the delay".
If we don't agree on a design change, the current behaviour is fine.

Mmmmh, sorry, without facial expressions, it’s hard for me to tell if you are
serious or sarcastic here. I thought I understood you wanted to address the
latency problem. Doesn’t that imply a change in design?
Well, I'm not the king of the castle. Considering that there is plenty of code to
optimize the "movie night" use case, and that in months nobody on the team has
suggested removing that code, I can't see a strong design change happening. Not
that the current code is easy to understand and change.

Personally, from this patch and my videos (playing games) you can easily
guess my opinion. I would merge this patch and work on reducing the
latency on the client to a minimum.

- video is synchronized to audio.

In case we decide to drop audio packets and re-sync, we also need an I-frame.


I don't see why. We are not dropping video frames or any video information.

Maybe I got confused by your proposal. Are you suggesting we investigate how
to reduce the delay? Or just sharing observations about where the delay comes from?
I would say both.

You can understand that the delay can become so high that it is hard to
control the VM.

Yes ;-)

Usually for communications (like mobile phones or similar), when a network
problem happens we hear noise or silence; packets are simply discarded
(or corrupted).

I think that dropping audio packets makes sense. But then, should we send
audio and video over UDP instead of TCP?


TCP will catch up. No reason for UDP. Yes, UDP would do the drop on its
own, avoiding retransmissions, but you would have to change the entire protocol,
design and implementation (not saying it is not worth investigating).

OK. I was only observing that TCP only makes sense if we prefer packets not
to be dropped. Having TCP retry (with the associated delays) only to drop the
packets in the end makes little sense to me.
In this case it would be audio packets... messages, if you prefer. There must be
some reason why YouTube, Facebook and other streaming websites use TCP.
Replacing TCP with UDP means you have to deal with a lot of problems.
If you want a reliable connection, you'll end up reimplementing TCP.
If you accept an unreliable connection, you have to deal with errors, changing the
protocol, compression, inserting redundancy (so spending more bandwidth) and
data synchronization points.

So maybe I got confused and you were not suggesting we drop packets. Then
what were you proposing? Or were you not proposing anything and just asking
around to get ideas?
I was not referring to network packets. Audio is transmitted as compressed chunks
sent inside SPICE messages. These messages are queued, and synchronization is
based on them. I was suggesting we discard them if the queue reaches some
threshold, as sketched below.
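
To make the idea concrete, here is a minimal sketch of such a client-side queue
(hypothetical names, not existing SPICE code; it only assumes GLib, which the
client already uses). New compressed chunks are queued, and the oldest ones are
discarded once the buffered duration exceeds a threshold:

#include <glib.h>
#include <stdint.h>

/* Hypothetical client-side audio queue: compressed chunks are queued as they
 * arrive, and the oldest ones are discarded once more than max_buffered_ms
 * of audio is already waiting, so the delay cannot grow without bound. */
typedef struct AudioChunk {
    uint32_t mm_time;         /* multimedia time of the chunk (ms) */
    uint32_t duration_ms;     /* decoded duration of the chunk */
    GBytes *payload;          /* compressed audio data */
} AudioChunk;

typedef struct AudioQueue {
    GQueue *chunks;           /* of AudioChunk* */
    uint32_t buffered_ms;     /* total duration currently queued */
    uint32_t max_buffered_ms; /* threshold, e.g. the ~500ms discussed below */
} AudioQueue;

static void audio_chunk_free(AudioChunk *chunk)
{
    g_bytes_unref(chunk->payload);
    g_free(chunk);
}

static void audio_queue_push(AudioQueue *q, AudioChunk *chunk)
{
    g_queue_push_tail(q->chunks, chunk);
    q->buffered_ms += chunk->duration_ms;

    /* Drop the oldest chunks instead of letting the playback delay grow. */
    while (q->buffered_ms > q->max_buffered_ms) {
        AudioChunk *old = g_queue_pop_head(q->chunks);
        q->buffered_ms -= old->duration_ms;
        audio_chunk_free(old);
    }
}

Whether the threshold should be as small as one or two chunks or closer to
500ms is exactly what is discussed below.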



I think this should be the behaviour, instead of introducing long buffering
and delays.

Agreed, though that depends on the delay. Some apps like BlueJeans deal with
relatively short interruptions by pausing and then playing catch-up (i.e.
playback is faster than real-time until it catches up with actual time).


Yes, you are right, trying to catch up (video but also audio) is an option
(while still limiting the delay).

Yes. The 500ms below was that limit in my mind.
Ok, so 500ms as an upper limit, not a lower one (as happens during normal
streaming buffering).


Basically this means that in the client we should limit the queue to 1 or 2 audio
packets, and a new packet would replace the queued one.

I think that the number of audio packets should still be larger than 1 or 2.
The lower that number, the higher the risk of hearing noticeable clicks.
I’d say we should probably allow for ~500ms of buffered audio.


This plus the video sync means that you click a button (or whatever action)
and you'll see the result after 1/2 seconds. That can be acceptable for office
usage, but surely not for animations, games or any kind of real-time simulation,
which would be impossible.

I was not suggesting that we accept a 500ms delay, but rather that this
be the maximum acceptable delay before we take harder measures,
like a restart. I think 0.5s is acceptable if it only happens a couple of times
a day when you lose Wi-Fi.
I personally won't live with that for long. I would try to catch up.
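
As a rough illustration of that catch-up idea (a sketch only; the rate and
thresholds are assumptions, not existing SPICE behaviour), the client could play
slightly faster than real time whenever more audio is buffered than a small
target, so the accumulated delay is consumed instead of being maintained:

/* Hypothetical catch-up policy: while the buffered audio exceeds a small
 * target, play a few percent faster than real time; the extra speed is
 * barely audible but slowly eats the delay accumulated during a network
 * hiccup.  Above a hard limit (the ~500ms discussed above) chunks would be
 * dropped instead, as in the queue sketch earlier in the thread. */
static double choose_playback_rate(uint32_t buffered_ms)
{
    const uint32_t target_ms = 50;   /* desired steady-state buffering */

    if (buffered_ms > target_ms) {
        return 1.05;                 /* ~5% faster until back on target */
    }
    return 1.0;                      /* on target: normal speed */
}

The returned rate would then drive a resampler (or the audio backend's playback
speed) on the client side.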

Some notes about this patch that are not easy to see.
The removed code basically introduces a minimum 400ms delay on multimedia
(audio/video) data. Note that part of the graphics is video and part is graphic
commands, which can potentially create different artifacts in the client.
The removal of this code also causes some changes in the streaming reports used
for video bandwidth adjustment. The reason is that num_drops and last_frame_delay
depend on the difference between the message multimedia time and the client's
concept of the multimedia time. So potentially we'll have a smaller delay
(last_frame_delay) and more "dropped" frames. I quoted "dropped" because the
computation of this value now depends more on the network delay (network latency
plus the time spent sending the video/audio messages). It is true that for mjpeg
the frame is really dropped (not displayed). Not displaying it is quite odd: we
have some video information, but if it arrives too late it is discarded, so we
are basically dropping the last frame without knowing whether we'll have other
data to display soon.
Also, if network conditions change and the latency grows, this can cause a lot
of frames to be dropped. To reduce CPU usage it would be better to queue
compressed frames and drop old ones instead of dropping new ones.
About the client concept of multimedia time: the server sends some messages to
tell its time (with a -400ms skew) and the client tries to use this; however,
with audio, every second the client sets the time based on the audio time (which
does not have the skew). So this patch avoids the client code having two
different concepts of time (skewed or not, depending on audio), which seems a
good reason to have this patch.
Back to the delay, dropped frames and stream reports: I'm not sure how the
bandwidth adjustment is affected (I'm actually trying this patch with the
streaming agent, which does not do any adjustment). On the other hand, if audio
is on, the behaviour should be more similar.
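
To make that dependency explicit, the client-side decision and the report fields
boil down to a comparison of the two clocks. A minimal sketch with illustrative
names (not the exact spice-gtk code):

#include <stdbool.h>
#include <stdint.h>

/* A frame whose multimedia time is already in the past relative to the
 * client's multimedia clock is not displayed (the mjpeg case described
 * above).  Removing the server-side -400ms skew tightens this comparison,
 * so more frames can be reported as "dropped" even though nothing was lost
 * on the network.  The same difference between the two clocks is what feeds
 * last_frame_delay in the streaming report. */
static bool frame_is_late(uint32_t frame_mm_time, uint32_t client_mm_time)
{
    return frame_mm_time < client_mm_time;
}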

I think 50ms is a good target.
Unless of course we agree to drop the audio/video sync.


Thanks
Christophe


Christophe



Frediano




Do not offset the time in an attempt to compensate for client latency.
The client should handle it by itself.

This patch is not meant to be merged; it is more for discussion.

This entirely removes the delay introduced by the server, which surely
avoids possible time drifts in the client.
The server just sends its concept of time without trying to force any
delay. I think only one end should handle this delay in an attempt to
synchronize audio and video, instead of doing it at both ends.

I'm currently trying it and the responsiveness is much better.

One thing I noted, however, is what happens if I block the connection.
Think about a momentary disconnection, like when you disconnect a cable
between the client and the server, or when you drop every packet between
them with a firewall for a short while (this can happen in many cases,
like a restart of a network device, a small fault in one, or simply losing
the signal to your Wi-Fi network). The audio restarts when the connection
is fixed (TCP takes care of it), but everything is then delayed. It looks
like the delay is maintained instead of being caught up. Actually, stopping
the audio/video removes the delay.

Signed-off-by: Frediano Ziglio <fziglio@xxxxxxxxxx>
---
server/main-dispatcher.c | 28 ----------------------------
server/main-dispatcher.h |  1 -
server/reds-private.h    |  2 --
server/reds.c            | 25 +++----------------------
server/reds.h            |  1 -
server/sound.c           | 26 --------------------------
server/sound.h           |  2 --
server/stream.c          |  7 -------
8 files changed, 3 insertions(+), 89 deletions(-)

diff --git a/server/main-dispatcher.c b/server/main-dispatcher.c
index 43f8b79a7..7ecb53f4b 100644
--- a/server/main-dispatcher.c
+++ b/server/main-dispatcher.c
@@ -146,7 +146,6 @@ main_dispatcher_init(MainDispatcher *self)
enum {
   MAIN_DISPATCHER_CHANNEL_EVENT = 0,
   MAIN_DISPATCHER_MIGRATE_SEAMLESS_DST_COMPLETE,
-    MAIN_DISPATCHER_SET_MM_TIME_LATENCY,
   MAIN_DISPATCHER_CLIENT_DISCONNECT,

   MAIN_DISPATCHER_NUM_MESSAGES
@@ -214,15 +213,6 @@ static void main_dispatcher_handle_migrate_complete(void *opaque,
   g_object_unref(mig_complete->client);
}

-static void main_dispatcher_handle_mm_time_latency(void *opaque,
-                                                   void *payload)
-{
-    MainDispatcher *self = opaque;
-    MainDispatcherMmTimeLatencyMessage *msg = payload;
-    reds_set_client_mm_time_latency(self->priv->reds, msg->client, msg->latency);
-    g_object_unref(msg->client);
-}
-
static void main_dispatcher_handle_client_disconnect(void *opaque,
                                                    void *payload)
{
@@ -249,21 +239,6 @@ void main_dispatcher_seamless_migrate_dst_complete(MainDispatcher *self,
                           &msg);
}

-void main_dispatcher_set_mm_time_latency(MainDispatcher *self, RedClient *client, uint32_t latency)
-{
-    MainDispatcherMmTimeLatencyMessage msg;
-
-    if (pthread_self() == dispatcher_get_thread_id(DISPATCHER(self))) {
-        reds_set_client_mm_time_latency(self->priv->reds, client, latency);
-        return;
-    }
-
-    msg.client = g_object_ref(client);
-    msg.latency = latency;
-    dispatcher_send_message(DISPATCHER(self), MAIN_DISPATCHER_SET_MM_TIME_LATENCY,
-                            &msg);
-}
-
void main_dispatcher_client_disconnect(MainDispatcher *self, RedClient *client)
{
   MainDispatcherClientDisconnectMessage msg;
@@ -318,9 +293,6 @@ void main_dispatcher_constructed(GObject *object)
   dispatcher_register_handler(DISPATCHER(self), MAIN_DISPATCHER_MIGRATE_SEAMLESS_DST_COMPLETE,
                               main_dispatcher_handle_migrate_complete,
                               sizeof(MainDispatcherMigrateSeamlessDstCompleteMessage),
                               false);
-    dispatcher_register_handler(DISPATCHER(self), MAIN_DISPATCHER_SET_MM_TIME_LATENCY,
-                                main_dispatcher_handle_mm_time_latency,
-                                sizeof(MainDispatcherMmTimeLatencyMessage),
-                                false);
   dispatcher_register_handler(DISPATCHER(self), MAIN_DISPATCHER_CLIENT_DISCONNECT,
                               main_dispatcher_handle_client_disconnect,
                               sizeof(MainDispatcherClientDisconnectMessage),
                               false);
diff --git a/server/main-dispatcher.h b/server/main-dispatcher.h
index 088a5c216..c49d40677 100644
--- a/server/main-dispatcher.h
+++ b/server/main-dispatcher.h
@@ -51,7 +51,6 @@ GType main_dispatcher_get_type(void) G_GNUC_CONST;

void main_dispatcher_channel_event(MainDispatcher *self, int event, SpiceChannelEventInfo *info);
void main_dispatcher_seamless_migrate_dst_complete(MainDispatcher *self, RedClient *client);
-void main_dispatcher_set_mm_time_latency(MainDispatcher *self, RedClient *client, uint32_t latency);
/*
* Disconnecting the client is always executed asynchronously,
* in order to protect from expired references in the routines
diff --git a/server/reds-private.h b/server/reds-private.h
index 259496c64..056264c22 100644
--- a/server/reds-private.h
+++ b/server/reds-private.h
@@ -29,7 +29,6 @@
#include "red-record-qxl.h"

#define MIGRATE_TIMEOUT (MSEC_PER_SEC * 10)
-#define MM_TIME_DELTA 400 /*ms*/

typedef struct TicketAuthentication {
   char password[SPICE_MAX_PASSWORD_LENGTH];
@@ -123,7 +122,6 @@ struct RedsState {
   SpiceBuffer client_monitors_config;

   int mm_time_enabled;
-    uint32_t mm_time_latency;

   SpiceCharDeviceInstance *vdagent;
   SpiceMigrateInstance *migration_interface;
diff --git a/server/reds.c b/server/reds.c
index 97f9219fe..633fdcba4 100644
--- a/server/reds.c
+++ b/server/reds.c
@@ -1838,7 +1838,7 @@ static void reds_handle_main_link(RedsState *reds, RedLinkInfo *link)
   if (!mig_target) {
       main_channel_client_push_init(mcc, g_list_length(reds->qxl_instances),
           reds->mouse_mode, reds->is_client_mouse_allowed,
-            reds_get_mm_time() - MM_TIME_DELTA,
+            reds_get_mm_time(),
           reds_qxl_ram_size(reds));
       if (reds->config->spice_name)
           main_channel_client_push_name(mcc, reds->config->spice_name);
@@ -1971,7 +1971,7 @@ void reds_on_client_semi_seamless_migrate_complete(RedsState *reds, RedClient *c
   main_channel_client_push_init(mcc, g_list_length(reds->qxl_instances),
                                 reds->mouse_mode,
                                 reds->is_client_mouse_allowed,
-                                  reds_get_mm_time() - MM_TIME_DELTA,
+                                  reds_get_mm_time(),
                                 reds_qxl_ram_size(reds));
   reds_link_mig_target_channels(reds, client);
   main_channel_client_migrate_dst_complete(mcc);
@@ -2670,25 +2670,7 @@ static void reds_send_mm_time(RedsState *reds)
       return;
   }
   spice_debug("trace");
-    main_channel_push_multi_media_time(reds->main_channel,
-                                       reds_get_mm_time() - reds->mm_time_latency);
-}
-
-void reds_set_client_mm_time_latency(RedsState *reds, RedClient *client, uint32_t latency)
-{
-    // TODO: multi-client support for mm_time
-    if (reds->mm_time_enabled) {
-        // TODO: consider network latency
-        if (latency > reds->mm_time_latency) {
-            reds->mm_time_latency = latency;
-            reds_send_mm_time(reds);
-        } else {
-            spice_debug("new latency %u is smaller than existing %u",
-                        latency, reds->mm_time_latency);
-        }
-    } else {
-        snd_set_playback_latency(client, latency);
-    }
+    main_channel_push_multi_media_time(reds->main_channel, reds_get_mm_time());
}

static int reds_init_net(RedsState *reds)
@@ -3083,7 +3065,6 @@ uint32_t reds_get_mm_time(void)
void reds_enable_mm_time(RedsState *reds)
{
   reds->mm_time_enabled = TRUE;
-    reds->mm_time_latency = MM_TIME_DELTA;
   reds_send_mm_time(reds);
}

diff --git a/server/reds.h b/server/reds.h
index 264ef205c..9b6c1e6ae 100644
--- a/server/reds.h
+++ b/server/reds.h
@@ -94,7 +94,6 @@ void reds_on_client_semi_seamless_migrate_complete(RedsState *reds, RedClient *c
void reds_on_client_seamless_migrate_complete(RedsState *reds, RedClient *client);
void reds_on_main_channel_migrate(RedsState *reds, MainChannelClient *mcc);

-void reds_set_client_mm_time_latency(RedsState *reds, RedClient *client, uint32_t latency);
uint32_t reds_get_streaming_video(const RedsState *reds);
GArray* reds_get_video_codecs(const RedsState *reds);
spice_wan_compression_t reds_get_jpeg_state(const RedsState *reds);
diff --git a/server/sound.c b/server/sound.c
index 9073626cd..996bb5208 100644
--- a/server/sound.c
+++ b/server/sound.c
@@ -972,32 +972,6 @@ SPICE_GNUC_VISIBLE void spice_server_playback_put_samples(SpicePlaybackInstance
   snd_send(SND_CHANNEL_CLIENT(playback_client));
}

-void snd_set_playback_latency(RedClient *client, uint32_t latency)
-{
-    GList *l;
-
-    for (l = snd_channels; l != NULL; l = l->next) {
-        SndChannel *now = l->data;
-        SndChannelClient *scc = snd_channel_get_client(now);
-        uint32_t type;
-        g_object_get(RED_CHANNEL(now), "channel-type", &type, NULL);
-        if (type == SPICE_CHANNEL_PLAYBACK && scc &&
-            red_channel_client_get_client(RED_CHANNEL_CLIENT(scc)) == client) {
-
-            if (red_channel_client_test_remote_cap(RED_CHANNEL_CLIENT(scc),
-                SPICE_PLAYBACK_CAP_LATENCY)) {
-                PlaybackChannelClient* playback = (PlaybackChannelClient*)scc;
-
-                playback->latency = latency;
-                snd_set_command(scc, SND_PLAYBACK_LATENCY_MASK);
-                snd_send(scc);
-            } else {
-                spice_debug("client doesn't not support SPICE_PLAYBACK_CAP_LATENCY");
-            }
-        }
-    }
-}
-
static int snd_desired_audio_mode(bool playback_compression, int frequency,
                                  bool client_can_celt, bool client_can_opus)
{
diff --git a/server/sound.h b/server/sound.h
index 2f0a2b93d..75d6c78cd 100644
--- a/server/sound.h
+++ b/server/sound.h
@@ -30,6 +30,4 @@ void snd_detach_record(SpiceRecordInstance *sin);

void snd_set_playback_compression(bool on);

-void snd_set_playback_latency(struct RedClient *client, uint32_t latency);
-
#endif /* SOUND_H_ */
diff --git a/server/stream.c b/server/stream.c
index 71755ea1f..ca0b49170 100644
--- a/server/stream.c
+++ b/server/stream.c
@@ -650,9 +650,6 @@ static void update_client_playback_delay(void *opaque, uint32_t delay_ms)
{
   StreamAgent *agent = opaque;
   DisplayChannelClient *dcc = agent->dcc;
-    RedChannel *channel = red_channel_client_get_channel(RED_CHANNEL_CLIENT(dcc));
-    RedClient *client = red_channel_client_get_client(RED_CHANNEL_CLIENT(dcc));
-    RedsState *reds = red_channel_get_server(channel);

   dcc_update_streams_max_latency(dcc, agent);

@@ -660,10 +657,6 @@ static void update_client_playback_delay(void *opaque, uint32_t delay_ms)
   if (delay_ms > dcc_get_max_stream_latency(dcc)) {
       dcc_set_max_stream_latency(dcc, delay_ms);
   }
-    spice_debug("resetting client latency: %u",
dcc_get_max_stream_latency(dcc));
-    main_dispatcher_set_mm_time_latency(reds_get_main_dispatcher(reds),
-                                        client,
-
dcc_get_max_stream_latency(agent->dcc));
}

static void bitmap_ref(gpointer data)

_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/spice-devel
