Hi Bobby,
On Fri, Apr 14, 2023 at 11:18:40AM +0000, Bobby Eshleman wrote:
> CC'ing Cong.
>
> On Fri, Apr 14, 2023 at 12:25:56AM +0000, Bobby Eshleman wrote:
> > Hey all!
> >
> > This series introduces support for datagrams to virtio/vsock.
Great! Thanks for restarting this work!
> > It is a spin-off (and smaller version) of this series from the
> > summer:
> > https://lore.kernel.org/all/cover.1660362668.git.bobby.eshleman@xxxxxxxxxxxxx/
> >
> > Please note that this is an RFC and should not be merged until
> > associated changes are made to the virtio specification, which will
> > follow after discussion of this series.
> >
> > This series first supports datagrams in a basic form for virtio, and
> > then optimizes the send path for all transports.
> >
> > The result is a very fast datagram communication protocol that
> > outperforms even UDP on multi-queue virtio-net w/ vhost on a variety
> > of multi-threaded workload samples.
> >
> > For those that are curious, some summary data comparing UDP and
> > VSOCK DGRAM (N=5):
> >
> >   vCPUs:             16
> >   virtio-net queues: 16
> >   payload size:      4 KB
> >   Setup:             bare metal + VM (non-nested)
> >
> >   UDP:               287.59 MB/s
> >   VSOCK DGRAM:       509.2  MB/s
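As a side note for anyone who wants to try something similar: a minimal
AF_VSOCK datagram sender looks roughly like the snippet below. The CID,
port, and iteration count are illustrative; this is not the actual
benchmark tool used for the numbers above.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	char buf[4096] = { 0 };
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = 3,		/* example guest CID */
		.svm_port = 1234,	/* example port */
	};
	int fd = socket(AF_VSOCK, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return EXIT_FAILURE;
	}

	for (int i = 0; i < 100000; i++) {
		if (sendto(fd, buf, sizeof(buf), 0,
			   (struct sockaddr *)&addr, sizeof(addr)) < 0) {
			perror("sendto");
			break;
		}
	}
	close(fd);
	return 0;
}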
> > Some notes about the implementation...
> >
> > This datagram implementation forces datagrams to self-throttle
> > according to the threshold set by sk_sndbuf. Its effect on
> > throughput and memory consumption is similar to that of the credit
> > mechanism used by streams, but it is not influenced by the receiving
> > socket as credits are.
So, sk_sndbuf influences the sender and sk_rcvbuf the receiver, right?

We should check whether VMCI behaves the same.
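If I read the series right, the send path gates on sk_sndbuf along
these lines (a rough sketch with an illustrative function name, not the
exact code from the series):

#include <net/sock.h>

/* Sketch of the self-throttling idea: the sender may not have more
 * than sk_sndbuf bytes of datagram payload in flight. The caller
 * sleeps (or returns -EAGAIN for non-blocking sockets) until the
 * driver frees queued buffers. */
static int vsock_dgram_check_sndbuf(struct sock *sk, size_t len)
{
	if (sk_wmem_alloc_get(sk) + len > READ_ONCE(sk->sk_sndbuf))
		return -EAGAIN;
	return 0;
}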
> > The device drops packets silently. There is room for improvement
> > here: the device and driver could grow some intelligence around
> > reducing the frequency of virtqueue kicks when packet loss is high.
> > I think there is a good discussion to be had on this.
Can you elaborate a bit here?
Do you mean some mechanism to report to the sender that a destination
(cid, port) is full so the packet will be dropped?
Can we adapt the credit mechanism?
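For reference, stream sockets already avoid silent drops with the
credit mechanism: the sender tracks the peer's advertised buffer space
and stops transmitting when it is exhausted, roughly like this
(simplified from virtio_transport_get_credit() in
net/vmw_vsock/virtio_transport_common.c):

#include <linux/virtio_vsock.h>

/* A stream sender may have at most peer_buf_alloc unacknowledged
 * bytes in flight, where "in flight" is the sent counter minus the
 * peer's last forwarded counter. */
static u32 stream_free_credit(struct virtio_vsock_sock *vvs)
{
	return vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->last_fwd_cnt);
}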
> > In this series I am also proposing that fairness be reexamined as an
> > issue separate from datagrams, which differs from my previous series
> > that coupled these issues. After further testing and reflection on
> > the design, I do not believe that these need to be coupled, and I do
> > not believe this implementation introduces additional unfairness or
> > exacerbates pre-existing unfairness.
I see.
> > I attempted to characterize vsock fairness by using a pool of
> > processes to stress the shared resources while measuring the
> > performance of a lone stream socket. Given an unfair preference for
> > datagrams, we would expect the lone stream socket to degrade much
> > more when a pool of datagram sockets is stressing the system than
> > when a pool of stream sockets is. The results, however, showed no
> > significant difference in the throughput degradation of the lone
> > stream socket between stressing the queue with a pool of datagrams
> > and stressing it with a pool of streams. The absolute difference in
> > throughput actually favored datagrams as the less interfering
> > stressor (mean difference +16% versus the stream stressors, N=7),
> > but the result was not statistically significant. Workloads were
> > matched for payload size, buffer size (to approximate memory
> > consumption), and process count, and the stress workloads were
> > configured to start before and end long after the lifetime of the
> > "lone" stream socket flow, to ensure that competing flows were
> > continuously hot.
> >
> > Given the above data, I propose that vsock fairness be addressed
> > independently of datagrams, and that its implementation be deferred
> > to a future series.
Makes sense to me.
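For concreteness, the shape of that experiment is something like the
sketch below (the CID, ports, pool size, and payload size are
illustrative; this is not the actual harness used for the numbers
above):

#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

#define NPROCS  8
#define PAYLOAD 4096

/* Each stressor floods one (cid, port) with datagrams until killed,
 * so the competing flows outlive the measured stream flow. */
static void stressor(unsigned int cid, unsigned int port)
{
	char buf[PAYLOAD] = { 0 };
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = cid,
		.svm_port = port,
	};
	int fd = socket(AF_VSOCK, SOCK_DGRAM, 0);

	if (fd < 0)
		exit(EXIT_FAILURE);
	for (;;)
		sendto(fd, buf, sizeof(buf), 0,
		       (struct sockaddr *)&addr, sizeof(addr));
}

int main(void)
{
	/* Start the competing datagram flows first... */
	for (int i = 0; i < NPROCS; i++) {
		if (fork() == 0)
			stressor(3, 5000 + i);
	}
	/* ...then measure the lone AF_VSOCK SOCK_STREAM flow here
	 * (connect, send for a fixed interval, report bytes/second),
	 * and finally kill the stressor pool. */
	return 0;
}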
I left some preliminary comments; anyway, it now seems reasonable to use
the same virtqueues, so we can go ahead with the spec proposal.
Thanks,
Stefano