Hi,

today I was running some quick tests on the GLib code, playing with bandwidth and latency, and I noticed that the net test was giving very strange results: with a delay of 1ms and a bandwidth of 400kb/s it was returning 325ms as roundtrip and a very high bandwidth :(

After a bit of digging I realized that the net test is done by sending 3 pings (using the spice protocol):
- 1 ping (warmup) with 0 bytes of data
- 1 ping (latency) with 0 bytes
- 1 ping (bandwidth) with 250 kb

Ping messages contain a timestamp (usec) which the client returns verbatim. Doing an strace I noticed that all the ping replies were returned at the same time. Taking into account that items are queued in the stream quite fast, and that the roundtrip is computed as current time minus the timestamp from the ping reply (see the first sketch at the end of this mail), this makes the roundtrip computation basically equal to the total roundtrip of all the pings, so the code thinks that the latency is high, and the bandwidth (computed as data divided by the extra time beyond the roundtrip) too.

More strace, this time involving remote-viewer (the Fedora 22 one), reveals that the Nagle algorithm is not disabled on the client: the client queues the replies to the socket, but the kernel sends them only after all the replies are queued, giving the huge roundtrip! So, at least while sending the ping replies, the Nagle algorithm should be disabled (second sketch below).

I'm trying to mitigate this issue using the Linux TCP information (see TCP_INFO in the tcp(7) man page), specifically the tcpi_rtt field (third sketch below). It's still a bit high (133ms instead of 1ms) but better than 325ms.

Now, I'm not that used to the client code, and just enabling this flag could lead to excessive network packets being sent to the server (potentially one per byte written). Is anybody skilled with the client code around?
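First sketch: roughly what the roundtrip computation I described looks like. This is only illustrative, not the actual server code; now_usec() and ping_roundtrip_usec() are made-up names:

/* Illustrative sketch (not the real code): the server puts the
 * current time (usec) in the ping, the client echoes it verbatim,
 * and the server subtracts on receipt of the reply. */
#include <stdint.h>
#include <sys/time.h>

static uint64_t now_usec(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000000 + tv.tv_usec;
}

/* If Nagle held the three replies back and delivered them together,
 * this difference covers the whole batch, not the single ping. */
static uint64_t ping_roundtrip_usec(uint64_t echoed_timestamp)
{
    return now_usec() - echoed_timestamp;
}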
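Second sketch: disabling Nagle around the ping reply with TCP_NODELAY. This assumes we can get at the raw socket fd; the real client wraps the socket (GIOStream, possibly TLS), so take it as an idea only:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch only: flush a ping reply immediately instead of letting
 * Nagle queue it behind the other replies. */
static ssize_t send_ping_reply_nodelay(int fd, const void *buf, size_t len)
{
    int one = 1, zero = 0;
    ssize_t n;

    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    n = send(fd, buf, len, 0);  /* goes out now, not batched */
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &zero, sizeof(zero));
    return n;
}

(Leaving TCP_NODELAY permanently enabled is exactly what worries me: every small write could become its own packet.)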
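Third sketch, the TCP_INFO mitigation: ask the kernel for its smoothed RTT estimate instead of trusting the (Nagle-delayed) ping timestamps. tcpi_rtt is in microseconds and this is Linux-specific, see tcp(7); socket_rtt_usec() is a made-up name:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdint.h>
#include <sys/socket.h>

/* Sketch: read the kernel's RTT estimate for an established TCP
 * socket. Returns 0 on success, -1 on error. */
static int socket_rtt_usec(int fd, uint32_t *rtt_usec)
{
    struct tcp_info info;
    socklen_t len = sizeof(info);

    if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) != 0)
        return -1;
    *rtt_usec = info.tcpi_rtt;  /* smoothed RTT, microseconds */
    return 0;
}

Frediano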