On Tue, Mar 19, 2024 at 04:49:50PM +1000, Gavin Shan wrote:
> 
> On 3/19/24 16:43, Michael S. Tsirkin wrote:
> > On Tue, Mar 19, 2024 at 04:38:49PM +1000, Gavin Shan wrote:
> > > On 3/19/24 16:09, Michael S. Tsirkin wrote:
> > > > 
> > > > > > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > > > > > index 49299b1f9ec7..7d852811c912 100644
> > > > > > > --- a/drivers/virtio/virtio_ring.c
> > > > > > > +++ b/drivers/virtio/virtio_ring.c
> > > > > > > @@ -687,9 +687,15 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> > > > > > >  	avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> > > > > > >  	vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> > > > > > > 
> > > > > > > -	/* Descriptors and available array need to be set before we expose the
> > > > > > > -	 * new available array entries. */
> > > > > > > -	virtio_wmb(vq->weak_barriers);
> > > > > > > +	/*
> > > > > > > +	 * Descriptors and available array need to be set before we expose
> > > > > > > +	 * the new available array entries. virtio_wmb() should be enough
> > > > > > > +	 * to ensure the order theoretically. However, a stronger barrier
> > > > > > > +	 * is needed by ARM64. Otherwise, the stale data can be observed
> > > > > > > +	 * by the host (vhost). A stronger barrier should work for other
> > > > > > > +	 * architectures, but performance loss is expected.
> > > > > > > +	 */
> > > > > > > +	virtio_mb(false);
> > > > > > >  	vq->split.avail_idx_shadow++;
> > > > > > >  	vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > > > > > >  					vq->split.avail_idx_shadow);
> > > > > > 
> > > > > > Replacing a DMB with a DSB is _very_ unlikely to be the correct solution
> > > > > > here, especially when ordering accesses to coherent memory.
> > > > > > 
> > > > > > In practice, either the larger timing difference from the DSB or the fact
> > > > > > that you're going from a Store->Store barrier to a full barrier is what
> > > > > > makes things "work" for you. Have you tried, for example, a DMB SY
> > > > > > (e.g. via __smp_mb())?
> > > > > > 
> > > > > > We definitely shouldn't take changes like this without a proper
> > > > > > explanation of what is going on.
> > > > > 
> > > > > Thanks for your comments, Will.
> > > > > 
> > > > > Yes, DMB should work for us. However, it seems this instruction has issues
> > > > > on NVIDIA's Grace Hopper. It's hard for me to understand how DMB and DSB
> > > > > work at the hardware level. I agree it's not the solution to replace DMB
> > > > > with DSB before we fully understand the root cause.
> > > > > 
> > > > > I tried the following possible replacements. __smp_mb() can avoid the
> > > > > issue, like __mb() does. __ndelay(10) can avoid the issue, but
> > > > > __ndelay(9) doesn't.
> > > > > 
> > > > > static inline int virtqueue_add_split(struct virtqueue *_vq, ...)
> > > > > {
> > > > >     :
> > > > >     /* Put entry in available array (but don't update avail->idx until they
> > > > >      * do sync). */
> > > > >     avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> > > > >     vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> > > > > 
> > > > >     /* Descriptors and available array need to be set before we expose the
> > > > >      * new available array entries. */
> > > > >     // Broken: virtio_wmb(vq->weak_barriers);
> > > > >     // Broken: __dma_mb();
> > > > >     // Work:   __mb();
> > > > >     // Work:   __smp_mb();
> > > > >     // Work:   __ndelay(100);
> > > > >     // Work:   __ndelay(10);
> > > > >     // Broken: __ndelay(9);
> > > > > 
> > > > >     vq->split.avail_idx_shadow++;
> > > > >     vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > > > >                                                  vq->split.avail_idx_shadow);
> > > > 
> > > > What if you stick __ndelay() here?
> > > 
> > > 	/* Put entry in available array (but don't update avail->idx until they
> > > 	 * do sync). */
> > > 	avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> > > 	vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> > > 
> > > 	/* Descriptors and available array need to be set before we expose the
> > > 	 * new available array entries. */
> > > 	virtio_wmb(vq->weak_barriers);
> > > 	vq->split.avail_idx_shadow++;
> > > 	vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > > 					vq->split.avail_idx_shadow);
> > > 	/* Try __ndelay(x) here as Michael suggested
> > > 	 *
> > > 	 * Work:   __ndelay(200);    possibly just makes it hard to reproduce
> > > 	 * Broken: __ndelay(100);
> > > 	 * Broken: __ndelay(20);
> > > 	 * Broken: __ndelay(10);
> > > 	 */
> > > 	__ndelay(200);
> > 
> > So we see that just changing the timing masks the race.
> > What are you using on the host side? vhost or qemu?
> 
> __ndelay(200) may just make the issue harder to reproduce, as I understand
> it. More delay here gives vhost relief, reducing the race.
> 
> The issue is only reproducible when vhost is turned on. Otherwise, we
> aren't able to hit the issue.
> 
>   -netdev tap,id=vnet0,vhost=true,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
>   -device virtio-net-pci,bus=pcie.8,netdev=vnet0,mac=52:54:00:f1:26:b0

Given it's vhost, it's also possible that the issue is on the host side.
I wonder what happens if we stick a delay or a stronger barrier
in vhost.c - either here:

		/* Make sure buffer is written before we update index. */
		smp_wmb();

or here:

		/* Only get avail ring entries after they have been
		 * exposed by guest. */
		smp_rmb();

?

> > > > > 
> > > > >     vq->num_added++;
> > > > > 
> > > > >     pr_debug("Added buffer head %i to %p\n", head, vq);
> > > > >     END_USE(vq);
> > > > >     :
> > > > > }
> > > > > 
> > > > > I also tried to measure the time consumed by various barrier-related
> > > > > instructions using ktime_get_ns(); the barrier itself should account
> > > > > for most of the measured time. __smp_mb() is slower than __smp_wmb()
> > > > > but faster than __mb().
> > > > > 
> > > > >   Instruction           Range of measured time in ns
> > > > >   ---------------------------------------------------
> > > > >   __smp_wmb()           [32  1128032]
> > > > >   __smp_mb()            [32  1160096]
> > > > >   __mb()                [32  1162496]
> 
> Thanks,
> Gavin
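
P.S. For anyone following the DMB/DSB point above: the guest patch swaps
virtio_wmb(vq->weak_barriers) for virtio_mb(false), and on arm64 those
helpers expand roughly as below (paraphrased from include/linux/virtio_ring.h
and the arm64 barrier definitions; double-check against your own tree):

	static inline void virtio_wmb(bool weak_barriers)
	{
		if (weak_barriers)
			virt_wmb();	/* __smp_wmb() -> DMB ISHST on arm64 */
		else
			dma_wmb();	/* DMB OSHST on arm64 */
	}

	static inline void virtio_mb(bool weak_barriers)
	{
		if (weak_barriers)
			virt_mb();	/* __smp_mb() -> DMB ISH on arm64 */
		else
			mb();		/* DSB SY on arm64 */
	}

So the proposed change is not only Store->Store to full barrier, it is also
DMB to DSB, which is why it needs a real explanation rather than "works for
me".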
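
Concretely, the host-side experiment would be a debug-only hunk along these
lines (untested sketch; I am assuming the smp_rmb() quoted above is the one
in vhost_get_vq_desc() and the smp_wmb() the one in vhost_add_used_n(), both
in drivers/vhost/vhost.c):

--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ ... @@ static int vhost_get_vq_desc(...)
 		/* Only get avail ring entries after they have been
 		 * exposed by guest. */
 		smp_rmb();
+
+		/* Debug only: full barrier plus delay, to check whether
+		 * the stale avail ring entry is still host-observable. */
+		smp_mb();
+		ndelay(200);

As with the guest-side __ndelay() runs, whether the barrier or only the
delay makes a difference should help separate an ordering bug from a mere
timing mask.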
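
And for the timing numbers at the end: a throwaway module along these lines
is one way to collect them (illustrative sketch, not Gavin's actual harness;
the two ktime_get_ns() calls sit inside the measured window, which likely
explains the wide ranges):

// SPDX-License-Identifier: GPL-2.0
#include <linux/module.h>
#include <linux/ktime.h>
#include <linux/limits.h>
#include <linux/minmax.h>
#include <asm/barrier.h>

#define NR_SAMPLES	1000

static int __init barrier_time_init(void)
{
	u64 lo = U64_MAX, hi = 0, t;
	int i;

	for (i = 0; i < NR_SAMPLES; i++) {
		t = ktime_get_ns();
		__smp_mb();		/* swap in __smp_wmb(), __mb(), ... */
		t = ktime_get_ns() - t;
		lo = min(lo, t);
		hi = max(hi, t);
	}
	pr_info("__smp_mb(): [%llu %llu]\n", lo, hi);

	/* One-shot test: fail init so nothing stays loaded. */
	return -EAGAIN;
}
module_init(barrier_time_init);

MODULE_LICENSE("GPL");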