Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support

On 2018/11/9 11:58 AM, Michael S. Tsirkin wrote:
On Fri, Nov 09, 2018 at 10:25:28AM +0800, Jason Wang wrote:
On 2018/11/8 10:14 PM, Michael S. Tsirkin wrote:
On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
On 2018/11/8 9:38 AM, Tiwei Bie wrote:
+
+	if (vq->vq.num_free < descs_used) {
+		pr_debug("Can't add buf len %i - avail = %i\n",
+			 descs_used, vq->vq.num_free);
+		/* FIXME: for historical reasons, we force a notify here if
+		 * there are outgoing parts to the buffer.  Presumably the
+		 * host should service the ring ASAP. */
I don't think we have a reason to do this for packed ring.
No historical baggage there, right?
Based on the original commit log, it seems that the notify here
is just an "optimization". But I don't quite understand what
"the heuristics which KVM uses" refers to. If it's safe to drop
this in packed ring, I'd like to do it.
According to the commit log, it looks like a workaround for the lguest networking
backend. I agree we should drop it; we shouldn't carry that burden.

But we should note that, with this removed, the comparison between packed and
split rings is somewhat unfair.
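
Roughly, a minimal sketch (based only on the hunk quoted above, not a
tested change) of what the packed ring check would look like with the
forced notify dropped - just fail the add and let the caller back off:

	if (vq->vq.num_free < descs_used) {
		pr_debug("Can't add buf len %i - avail = %i\n",
			 descs_used, vq->vq.num_free);
		/* No forced notify for packed ring: return -ENOSPC and let
		 * the caller retry once the device has made progress. */
		END_USE(vq);
		return -ENOSPC;
	}
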
To be frank, I don't think this ever triggers. When would it?

I think it can happen, e.g. in the XDP transmission path in
__virtnet_xdp_xmit_one():


         err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);
         if (unlikely(err))
                 return -ENOSPC; /* Caller handle free/refcnt */

I see. We used to do it for regular xmit but stopped
doing it. Is it fine for xdp then?


There's no traffic control in XDP, so it was the only thing we could do.
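
For example (a rough sketch only, not the exact virtio_net code; the
names frames[], n and drops below are illustrative), the caller simply
frees the frame and counts it as dropped instead of requeueing it:

	for (i = 0; i < n; i++) {
		struct xdp_frame *xdpf = frames[i];

		if (__virtnet_xdp_xmit_one(vi, sq, xdpf)) {
			/* Ring full: nothing to queue back on, so drop. */
			xdp_return_frame_rx_napi(xdpf);
			drops++;
		}
	}
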



Considering the recent removal of lguest support,
maybe we can drop this for the split ring as well?

Thanks
If it's helpful, then for sure we can drop it for virtio 1.
Can you see any perf differences at all? With which device?

I haven't tested, but consider the case of XDP_TX in the guest plus vhost_net in
the host. Since vhost_net is half duplex, it's much easier to trigger this
condition.

Thanks
Sounds reasonable. Worth testing before we change things though.


Let me test and submit a patch.
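
Something along these lines for the split ring (a sketch only, against
the virtqueue_add_split() layout from this series; exact context lines
may differ, and it needs the testing mentioned above first):

 	if (vq->vq.num_free < descs_used) {
 		pr_debug("Can't add buf len %i - avail = %i\n",
 			 descs_used, vq->vq.num_free);
-		/* FIXME: for historical reasons, we force a notify here if
-		 * there are outgoing parts to the buffer.  Presumably the
-		 * host should service the ring ASAP. */
-		if (out_sgs)
-			vq->notify(&vq->vq);
 		if (indirect)
 			kfree(desc);
 		END_USE(vq);
 		return -ENOSPC;
 	}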

Thanks



commit 44653eae1407f79dff6f52fcf594ae84cb165ec4
Author: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
Date:   Fri Jul 25 12:06:04 2008 -0500

       virtio: don't always force a notification when ring is full

       We force notification when the ring is full, even if the host has
       indicated it doesn't want to know.  This seemed like a good idea at
       the time: if we fill the transmit ring, we should tell the host
       immediately.

       Unfortunately this logic also applies to the receiving ring, which is
       refilled constantly.  We should introduce real notification thesholds
       to replace this logic.  Meanwhile, removing the logic altogether breaks
       the heuristics which KVM uses, so we use a hack: only notify if there are
       outgoing parts of the new buffer.

       Here are the number of exits with lguest's crappy network implementation:

       Before:
               network xmit 7859051 recv 236420
       After:
               network xmit 7858610 recv 118136

       Signed-off-by: Rusty Russell <rusty@xxxxxxxxxxxxxxx>

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 72bf8bc09014..21d9a62767af 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -87,8 +87,11 @@ static int vring_add_buf(struct virtqueue *_vq,
 	if (vq->num_free < out + in) {
 		pr_debug("Can't add buf len %i - avail = %i\n",
 			 out + in, vq->num_free);
-		/* We notify *even if* VRING_USED_F_NO_NOTIFY is set here. */
-		vq->notify(&vq->vq);
+		/* FIXME: for historical reasons, we force a notify here if
+		 * there are outgoing parts to the buffer.  Presumably the
+		 * host should service the ring ASAP. */
+		if (out)
+			vq->notify(&vq->vq);
 		END_USE(vq);
 		return -ENOSPC;
 	}


_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization



