On 2018/10/17 5:39 PM, Jason Wang wrote:
Hi Jason and Stefan,
I may have found the reason for the bad performance.
pkt_len is limited to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (4K), which
caps the bandwidth at 500~600MB/s. Once I increase it to 64K, the
throughput improves about 3x (~1500MB/s).
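For reference, these are the relevant defines in
include/linux/virtio_vsock.h; what I tested was simply bumping the
default up to the existing 64K packet maximum (a quick hack to
measure, not a proposed patch):

	/* include/linux/virtio_vsock.h (current) */
	#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 4)
	#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE		(1024 * 64)

	/* quick experiment: allocate 64K rx buffers instead of 4K */
	#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 64)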
Looks like the value was chosen as a balance between rx buffer size
and performance. Always allocating 64K, even for small packets, is
wasteful and stresses guest memory. Virtio-net avoids this with
mergeable rx buffers, which allow a big packet to be scattered across
multiple smaller buffers. We could reuse this idea, or revisit the
idea of using virtio-net/vhost-net as a transport for vsock.
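A minimal sketch of the receive side of that scheme, assuming
virtio-net's mergeable-buffer layout (append_to_skb() and the
rq/skb/vdev variables are illustrative here, not the actual
virtio_net code):

	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
	u16 num_buf = virtio16_to_cpu(vdev, hdr->num_buffers);
	unsigned int len;

	/* The first buffer holds the header plus the start of the
	 * packet; the device spread the remainder over num_buf - 1
	 * more small buffers, so gather them back into one skb. */
	while (--num_buf) {
		void *page_buf = virtqueue_get_buf(rq->vq, &len);
		append_to_skb(skb, page_buf, len);	/* illustrative */
	}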
What is interesting is that the performance is still behind vhost-net.
Thanks
By the way, I send 64K from the application in a single call, and I
no longer use sg_init_one(); I rewrote the function to pack an sg
list, because pkt_len now spans multiple pages.
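For the record, the packing looks roughly like this (a sketch,
assuming the payload's pages have already been collected into a
pages[] array of nr_pages entries; those names and the array bound
are mine):

	struct scatterlist sg[VSOCK_MAX_PAGES];	/* illustrative bound */
	int i;

	/* sg_init_one() can only describe one contiguous buffer, so
	 * describe the payload page by page instead. */
	sg_init_table(sg, nr_pages);
	for (i = 0; i < nr_pages; i++)
		sg_set_page(&sg[i], pages[i], PAGE_SIZE, 0);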
Thanks,
Yiwen.
Btw, if you're using vsock for transferring large files, it may be
more efficient to implement sendpage() for vsock to allow
sendfile()/splice() to work.
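From userspace that would look like the sketch below. sendfile()
already runs over vsock today through the generic copying fallback; a
vsock sendpage() would make this path avoid the extra copy (the
CID/port/filename here are illustrative):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/sendfile.h>
	#include <sys/socket.h>
	#include <sys/stat.h>
	#include <linux/vm_sockets.h>

	int main(void)
	{
		struct sockaddr_vm addr = {
			.svm_family = AF_VSOCK,
			.svm_cid = VMADDR_CID_HOST,
			.svm_port = 1234,	/* illustrative port */
		};
		struct stat st;
		off_t off = 0;
		int s, in;

		s = socket(AF_VSOCK, SOCK_STREAM, 0);
		if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
			perror("connect");
			return 1;
		}
		in = open("bigfile", O_RDONLY);	/* illustrative file */
		fstat(in, &st);
		/* Copies through the socket layer today; with a vsock
		 * sendpage() the pages would be handed over directly. */
		if (sendfile(s, in, &off, st.st_size) < 0)
			perror("sendfile");
		close(in);
		close(s);
		return 0;
	}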
Thanks