[RFC PATCH TRIVIAL] Reading the virtio code...

From: Rob Landley <rlandley@xxxxxxxxxxxxx>

Going indirect for only two buffers isn't likely to be a performance win,
because the kmalloc/kfree overhead for the indirect block can't be cheaper
than one extra linked-list traversal.
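
For context, the indirect path pays a kmalloc up front (and a matching
kfree when the buffer is later detached). A sketch of the relevant
allocation, paraphrased from memory of the surrounding vring_add_indirect()
code rather than quoted from this patch:

	/* Indirect path: allocate a separate descriptor array for the
	 * whole scatterlist; this is the kmalloc the direct path avoids. */
	desc = kmalloc((out + in) * sizeof(struct vring_desc), gfp);
	if (!desc)
		return -ENOMEM;	/* caller falls back to direct descriptors */

With only two buffers, the direct path instead chains two ring descriptors,
costing one extra ring slot and one extra link to follow.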

Properly "tuning" the threshold would probably be workload-specific.
(One big downside of not going indirect is extra pressure on the ring's
descriptor table entries, and the table size varies between devices.)
But I think that in the general case, 2 is a defensible minimum.
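
If the threshold does turn out to need per-workload tuning, one
hypothetical approach (not part of this patch; "indirect_thresh" is a
made-up name) would be a module parameter:

	#include <linux/moduleparam.h>

	/* Hypothetical tunable, sketched for discussion only. */
	static unsigned int indirect_thresh = 2;
	module_param(indirect_thresh, uint, 0644);
	MODULE_PARM_DESC(indirect_thresh,
			 "buffer count above which indirect descriptors are used");

	/* ...and in virtqueue_add_buf_gfp(): */
	if (vq->indirect && (out + in) > indirect_thresh && vq->num_free) {

That would keep this patch's default while letting an admin override it
at module load time.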

Signed-off-by: Rob Landley <rlandley@xxxxxxxxxxxxx>
---

 drivers/virtio/virtio_ring.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index b0043fb..2b69441 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -173,7 +173,7 @@ int virtqueue_add_buf_gfp(struct virtqueue *_vq,
 
 	/* If the host supports indirect descriptor tables, and we have multiple
 	 * buffers, then go indirect. FIXME: tune this threshold */
-	if (vq->indirect && (out + in) > 1 && vq->num_free) {
+	if (vq->indirect && (out + in) > 2 && vq->num_free) {
 		head = vring_add_indirect(vq, sg, out, in, gfp);
 		if (likely(head >= 0))
 			goto add_head;