Hi,

On 5/8/21 2:30 AM, Liming Sun wrote:
> The virtio framework uses wmb() when updating avail->idx. It
> guarantees the write order, but not necessarily loading order
> for the code accessing the memory. This commit adds a load barrier
> after reading the avail->idx to make sure all the data in the
> descriptor is visible. It also adds a barrier when returning the
> packet to virtio framework to make sure read/writes are visible to
> the virtio code.
>
> Fixes: 1357dfd7261f ("platform/mellanox: Add TmFifo driver for Mellanox BlueField Soc")
> Signed-off-by: Liming Sun <limings@xxxxxxxxxx>

I'm not familiar enough with this / the virtio code to be able to
judge if this makes sense (I assume it does).

Can I get an Ack or Reviewed-by from one of the other Mellanox
folks please?

Regards,

Hans

> ---
> v1->v2:
>     Updates for Vadim's comments:
>     - Add the 'Fixes' field in the commit message.
> v1: Initial version
> ---
>  drivers/platform/mellanox/mlxbf-tmfifo.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
> index bbc4e71..38800e8 100644
> --- a/drivers/platform/mellanox/mlxbf-tmfifo.c
> +++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
> @@ -294,6 +294,9 @@ static irqreturn_t mlxbf_tmfifo_irq_handler(int irq, void *arg)
>  	if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
>  		return NULL;
>
> +	/* Make sure 'avail->idx' is visible already. */
> +	virtio_rmb(false);
> +
>  	idx = vring->next_avail % vr->num;
>  	head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
>  	if (WARN_ON(head >= vr->num))
> @@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
>  	 * done or not. Add a memory barrier here to make sure the update above
>  	 * completes before updating the idx.
>  	 */
> -	mb();
> +	virtio_mb(false);
>  	vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
>  }
>
> @@ -733,6 +736,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
>  		desc = NULL;
>  		fifo->vring[is_rx] = NULL;
>
> +		/*
> +		 * Make sure the load/store are in order before
> +		 * returning back to virtio.
> +		 */
> +		virtio_mb(false);
> +
>  		/* Notify upper layer that packet is done. */
>  		spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
>  		vring_interrupt(0, vring->vq);
>
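
For anyone else reviewing who wants the barrier pairing spelled out: the
sketch below is purely illustrative, not tmfifo or virtio core code.
struct demo_ring, publish_desc() and consume_desc() are made-up names;
only the barrier placement mirrors what the commit message describes
(producer does virtio_wmb() before bumping avail->idx, consumer does
virtio_rmb() after reading it and before touching the ring entry).

#include <linux/errno.h>
#include <linux/virtio_config.h>
#include <linux/virtio_ring.h>

/* Hypothetical ring state, only for this sketch. */
struct demo_ring {
	struct virtio_device *vdev;
	struct vring_avail *avail;
	u16 avail_idx;		/* producer's shadow copy of avail->idx */
	u16 next_avail;		/* consumer's last-seen position */
	u16 num;		/* ring size */
};

/* Producer side: write the ring entry first, then publish the index. */
static void publish_desc(struct demo_ring *r, u16 head)
{
	r->avail->ring[r->avail_idx % r->num] = cpu_to_virtio16(r->vdev, head);

	/* Order the ring/descriptor stores before the idx store. */
	virtio_wmb(false);

	r->avail->idx = cpu_to_virtio16(r->vdev, ++r->avail_idx);
}

/* Consumer side (the role the tmfifo driver plays in the patch). */
static int consume_desc(struct demo_ring *r, u16 *head)
{
	if (r->next_avail == virtio16_to_cpu(r->vdev, r->avail->idx))
		return -EAGAIN;	/* nothing new published yet */

	/*
	 * Pairs with the producer's virtio_wmb(): don't load the ring
	 * entry (or the descriptor it points to) before avail->idx.
	 */
	virtio_rmb(false);

	*head = virtio16_to_cpu(r->vdev,
				r->avail->ring[r->next_avail++ % r->num]);
	return 0;
}

Passing weak_barriers=false selects the mandatory dma_rmb()/dma_wmb()
variants rather than the virt_*() ones, which I understand is the point
here since the other side is real hardware, not a hypervisor. The third
hunk of the patch (virtio_mb() before vring_interrupt()) is the same
idea applied on the return path: the used-ring updates must be visible
before the virtio core is told to go look at them.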