On 2018/11/5 3:43 PM, jiangyiwen wrote:
Currently vsock only supports sending and receiving small packets, so it
cannot achieve high performance. As previously discussed with Jason Wang,
I revisited the mergeable rx buffer idea from vhost-net and implemented
mergeable rx buffers in vhost-vsock. This allows a big packet to be
scattered across multiple buffers and improves performance significantly.
I wrote a tool to test vhost-vsock performance, mainly sending big
packets (64K) in both directions, Guest->Host and Host->Guest. The
results are as follows:
Before:
                Single socket    Multiple sockets (Max Bandwidth)
Guest->Host     ~400MB/s         ~480MB/s
Host->Guest     ~1450MB/s        ~1600MB/s

After:
                Single socket    Multiple sockets (Max Bandwidth)
Guest->Host     ~1700MB/s        ~2900MB/s
Host->Guest     ~1700MB/s        ~2900MB/s
As the test results show, performance improves significantly, and guest
memory is no longer wasted.
Hi:
Thanks for the patches; the numbers are really impressive.
But instead of duplicating code between vsock and virtio-net, I was
considering using virtio-net as a transport for vsock. Then we would get
all the existing features like batching, mergeable rx buffers, and
multiqueue. Would you consider this idea? Thoughts?
---
Yiwen Jiang (5):
VSOCK: support fill mergeable rx buffer in guest
VSOCK: support fill data to mergeable rx buffer in host
VSOCK: support receive mergeable rx buffer in guest
VSOCK: modify default rx buf size to improve performance
VSOCK: batch sending rx buffer to increase bandwidth
drivers/vhost/vsock.c | 135 +++++++++++++++++++++++------
include/linux/virtio_vsock.h | 15 +++-
include/uapi/linux/virtio_vsock.h | 5 ++
net/vmw_vsock/virtio_transport.c | 147 ++++++++++++++++++++++++++------
net/vmw_vsock/virtio_transport_common.c | 59 +++++++++++--
5 files changed, 300 insertions(+), 61 deletions(-)
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization