Re: [PATCH] virtio_ring: Fix the stale index in available ring

On Tue, Mar 19, 2024 at 02:09:34AM -0400, Michael S. Tsirkin wrote:
> On Tue, Mar 19, 2024 at 02:59:23PM +1000, Gavin Shan wrote:
> > On 3/19/24 02:59, Will Deacon wrote:
> > > On Thu, Mar 14, 2024 at 05:49:23PM +1000, Gavin Shan wrote:
> > > > The issue was reported by Yihuang Yu, who ran a 'netperf' test on
> > > > NVidia's grace-grace and grace-hopper machines. The 'netperf'
> > > > client is started in a VM hosted by the grace-hopper machine,
> > > > while the 'netperf' server runs on the grace-grace machine.
> > > > 
> > > > The VM is started with virtio-net and vhost enabled. We observe
> > > > an error message spewing from the VM, followed by a soft-lockup
> > > > report. The error message indicates that the data associated with
> > > > the descriptor (index: 135) has been released and the queue has
> > > > been marked as broken. This eventually leads to an endless effort
> > > > to fetch a free buffer (skb) in drivers/net/virtio_net.c::start_xmit()
> > > > and to the soft-lockup. The stale index 135 is fetched from the
> > > > available ring and published to the used ring by vhost, meaning the
> > > > writes to the available ring element and the available index were
> > > > reordered.
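> > > > 
> > > > Conceptually, the device side consumes the ring as in the sketch
> > > > below (a simplified rendition, not the actual vhost code, and
> > > > process_head() is a hypothetical helper). If the two guest stores
> > > > are observed out of order, the device picks up a stale ring[]
> > > > entry after seeing the new index:
> > > > 
> > > >    /* Device/vhost side, simplified */
> > > >    static void device_poll_avail(struct vring_avail *avail, u16 num)
> > > >    {
> > > >            static u16 last_avail_idx;
> > > >            u16 head;
> > > > 
> > > >            while (last_avail_idx != avail->idx) {  /* sees new idx */
> > > >                    virt_rmb();  /* pairs with the guest's barrier */
> > > >                    /* Without guest-side ordering, this entry can
> > > >                     * still hold a stale descriptor head, hence
> > > >                     * "id 135 is not a head!". */
> > > >                    head = avail->ring[last_avail_idx++ & (num - 1)];
> > > >                    process_head(head);
> > > >            }
> > > >    }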
> > > > 
> > > >    /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64              \
> > > >    -accel kvm -machine virt,gic-version=host                            \
> > > >       :                                                                 \
> > > >    -netdev tap,id=vnet0,vhost=on                                        \
> > > >    -device virtio-net-pci,bus=pcie.8,netdev=vnet0,mac=52:54:00:f1:26:b0 \
> > > > 
> > > >    [   19.993158] virtio_net virtio1: output.0:id 135 is not a head!
> > > > 
> > > > Fix the issue by replacing virtio_wmb(vq->weak_barriers) with the
> > > > stronger virtio_mb(false), which is equivalent to replacing the 'dmb'
> > > > instruction with 'dsb' on ARM64. It should work for other
> > > > architectures as well, but a performance loss is expected.
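> > > > 
> > > > For context, a simplified rendering of the two helpers from
> > > > include/linux/virtio_ring.h (the comments mapping them to ARM64
> > > > instructions are mine):
> > > > 
> > > >    static inline void virtio_wmb(bool weak_barriers)
> > > >    {
> > > >            if (weak_barriers)
> > > >                    virt_wmb();     /* __smp_wmb(): dmb ishst */
> > > >            else
> > > >                    wmb();          /* dsb st */
> > > >    }
> > > > 
> > > >    static inline void virtio_mb(bool weak_barriers)
> > > >    {
> > > >            if (weak_barriers)
> > > >                    virt_mb();      /* __smp_mb(): dmb ish */
> > > >            else
> > > >                    mb();           /* dsb sy */
> > > >    }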
> > > > 
> > > > Cc: stable@xxxxxxxxxxxxxxx
> > > > Reported-by: Yihuang Yu <yihyu@xxxxxxxxxx>
> > > > Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
> > > > ---
> > > >   drivers/virtio/virtio_ring.c | 12 +++++++++---
> > > >   1 file changed, 9 insertions(+), 3 deletions(-)
> > > > 
> > > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > > index 49299b1f9ec7..7d852811c912 100644
> > > > --- a/drivers/virtio/virtio_ring.c
> > > > +++ b/drivers/virtio/virtio_ring.c
> > > > @@ -687,9 +687,15 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> > > >   	avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> > > >   	vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> > > > -	/* Descriptors and available array need to be set before we expose the
> > > > -	 * new available array entries. */
> > > > -	virtio_wmb(vq->weak_barriers);
> > > > +	/*
> > > > +	 * Descriptors and available array need to be set before we expose
> > > > +	 * the new available array entries. virtio_wmb() should be enough
> > > > +	 * to ensure the ordering theoretically. However, a stronger barrier
> > > > +	 * is needed by ARM64. Otherwise, stale data can be observed by the
> > > > +	 * host (vhost). The stronger barrier should work for other
> > > > +	 * architectures, but a performance loss is expected.
> > > > +	 */
> > > > +	virtio_mb(false);
> > > >   	vq->split.avail_idx_shadow++;
> > > >   	vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > > >   						vq->split.avail_idx_shadow);
> > > 
> > > Replacing a DMB with a DSB is _very_ unlikely to be the correct solution
> > > here, especially when ordering accesses to coherent memory.
> > > 
> > > In practice, either the larger timing difference from the DSB or the fact
> > > that you're going from a Store->Store barrier to a full barrier is what
> > > makes things "work" for you. Have you tried, for example, a DMB SY
> > > (e.g. via __smp_mb())?
> > > 
> > > We definitely shouldn't take changes like this without a proper
> > > explanation of what is going on.
> > > 
> > 
> > Thanks for your comments, Will.
> > 
> > Yes, DMB should work for us. However, it seems this instruction has issues
> > on NVidia's grace-hopper. It's hard for me to understand how DMB and DSB
> > work at the hardware level. I agree that replacing DMB with DSB isn't the
> > solution before we fully understand the root cause.
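> > 
> > For reference, my understanding of the mappings in
> > arch/arm64/include/asm/barrier.h is roughly as follows:
> > 
> >     Macro                 ARM64 instruction
> >     ----------------------------------------------
> >     __mb()                dsb sy    (full system)
> >     __dma_mb()            dmb osh   (outer shareable)
> >     __smp_mb()            dmb ish   (inner shareable)
> >     __smp_wmb()           dmb ishst (inner shareable, store-store)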
> > 
> > I tried the possible replacements shown below. __smp_mb() avoids the issue,
> > as __mb() does. __ndelay(10) avoids the issue, but __ndelay(9) doesn't.
> > 
> > static inline int virtqueue_add_split(struct virtqueue *_vq, ...)
> > {
> >     :
> >         /* Put entry in available array (but don't update avail->idx until they
> >          * do sync). */
> >         avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> >         vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> > 
> >         /* Descriptors and available array need to be set before we expose the
> >          * new available array entries. */
> >         // Broken: virtio_wmb(vq->weak_barriers);
> >         // Broken: __dma_mb();
> >         // Work:   __mb();
> >         // Work:   __smp_mb();

Did you try __smp_wmb? And wmb?

> >         // Work:   __ndelay(100);
> >         // Work:   __ndelay(10);
> >         // Broken: __ndelay(9);
> > 
> >         vq->split.avail_idx_shadow++;
> >         vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> >                                                 vq->split.avail_idx_shadow);
> 
> What if you stick __ndelay here?

And keep virtio_wmb above?

> 
> >         vq->num_added++;
> > 
> >         pr_debug("Added buffer head %i to %p\n", head, vq);
> >         END_USE(vq);
> >         :
> > }
> > 
> > I also tried to measure the time consumed by various barrier-related
> > instructions using ktime_get_ns(), though the ktime_get_ns() calls likely
> > account for most of the measured time (see the rough sketch after the
> > table). __smp_mb() is slower than __smp_wmb() but faster than __mb().
> > 
> >     Instruction           Range of measured time (ns)
> >     ----------------------------------------------
> >     __smp_wmb()           [32  1128032]
> >     __smp_mb()            [32  1160096]
> >     __mb()                [32  1162496]
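> > 
> > The numbers came from a naive loop along the following lines (a rough
> > sketch, with the barrier under test swapped in per run):
> > 
> >     u64 start, delta;
> > 
> >     start = ktime_get_ns();
> >     __smp_mb();                     /* barrier under test */
> >     delta = ktime_get_ns() - start;
> >     pr_info("barrier: %llu ns\n", delta);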
> > 
> > Thanks,
> > Gavin




