On 2019/4/5 4:44 PM, Stefan Hajnoczi wrote:
On Thu, Apr 04, 2019 at 12:58:38PM +0200, Stefano Garzarella wrote:
In order to increase host -> guest throughput with large packets,
we can use 64 KiB RX buffers.
Signed-off-by: Stefano Garzarella <sgarzare@xxxxxxxxxx>
---
include/linux/virtio_vsock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 6d7a22cc20bf..43cce304408e 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -10,7 +10,7 @@
#define VIRTIO_VSOCK_DEFAULT_MIN_BUF_SIZE 128
#define VIRTIO_VSOCK_DEFAULT_BUF_SIZE (1024 * 256)
#define VIRTIO_VSOCK_DEFAULT_MAX_BUF_SIZE (1024 * 256)
-#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (1024 * 4)
+#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (1024 * 64)
This patch raises rx ring memory consumption from 128 * 4KB = 512KB to
128 * 64KB = 8MB.
Michael, Jason: Any advice regarding rx/tx ring sizes and buffer sizes?
Depending on rx ring size and the workload's packet size, different
values might be preferred.
This could become a tunable in the future. It determines the size of
the guest driver's rx buffers.
In virtio-net, we have mergeable rx buffers and estimate the rx buffer
size through an EWMA.
That's another reason I suggest squashing the vsock code into virtio-net.
Thanks
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization