On Wed, Apr 07, 2021 at 01:16:12PM +0300, Eli Cohen wrote:
> On Wed, Apr 07, 2021 at 03:25:00PM +0800, Jason Wang wrote:
> > 
> > On 2021/4/7 1:04 AM, Parav Pandit wrote:
> > > From: Eli Cohen <elic@xxxxxxxxxx>
> > > 
> > > When we suspend the VM, the VDPA interface will be reset. When the VM is
> > > resumed again, clear_virtqueues() will clear the available and used
> > > indices, resulting in hardware virtqueue objects becoming out of sync.
> > > We can avoid this function altogether since qemu will clear them if
> > > required, e.g. when the VM went through a reboot.
> > > 
> > > Moreover, since the hw available and used indices should always be
> > > identical on query and should be restored to the same value for
> > > virtqueues that complete in order, we set the single value provided
> > > by set_vq_state(). In get_vq_state() we return the value of the
> > > hardware used index.
> > > 
> > > Fixes: b35ccebe3ef7 ("vdpa/mlx5: Restore the hardware used index after change map")
> > > Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> > > Signed-off-by: Eli Cohen <elic@xxxxxxxxxx>
> > > ---
> > >  drivers/vdpa/mlx5/net/mlx5_vnet.c | 17 ++++-------------
> > >  1 file changed, 4 insertions(+), 13 deletions(-)
> > > 
> > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > index 56d463d2be85..a6e6d44b9ca5 100644
> > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > @@ -1170,6 +1170,7 @@ static void suspend_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
> > >  		return;
> > >  	}
> > >  	mvq->avail_idx = attr.available_index;
> > > +	mvq->used_idx = attr.used_index;
> > >  }
> > >  
> > >  static void suspend_vqs(struct mlx5_vdpa_net *ndev)
> > > @@ -1466,6 +1467,7 @@ static int mlx5_vdpa_set_vq_state(struct vdpa_device *vdev, u16 idx,
> > >  		return -EINVAL;
> > >  	}
> > >  
> > > +	mvq->used_idx = state->avail_index;
> > >  	mvq->avail_idx = state->avail_index;
> > >  	return 0;
> > >  }
> > > @@ -1483,7 +1485,7 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
> > >  	 * that cares about emulating the index after vq is stopped.
> > >  	 */
> > >  	if (!mvq->initialized) {
> > > -		state->avail_index = mvq->avail_idx;
> > > +		state->avail_index = mvq->used_idx;
> > 
> > Even if the hardware avail idx is always equal to the used idx, I would
> > still keep using the avail_idx. This makes it easier to review, since it
> > is consistent with e.g. the kernel vhost backend implementations
> > (the last_avail_idx in vhost_virtqueue).
> > 
> The problem is that there is a bug in the firmware such that for RX
> virtqueues the firmware returns a wrong value in the avail_idx. The
> correct value is reported in used_idx. That's why we need to take the
> value from used_idx.

Maybe add a code comment here so people can figure it out?

> > Thanks
> > 
> > >  		return 0;
> > >  	}
> > > @@ -1492,7 +1494,7 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
> > >  		mlx5_vdpa_warn(mvdev, "failed to query virtqueue\n");
> > >  		return err;
> > >  	}
> > > -	state->avail_index = attr.available_index;
> > > +	state->avail_index = attr.used_index;
> > >  	return 0;
> > >  }
> > > @@ -1572,16 +1574,6 @@ static void teardown_virtqueues(struct mlx5_vdpa_net *ndev)
> > >  	}
> > >  }
> > > 
> > > -static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
> > > -{
> > > -	int i;
> > > -
> > > -	for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
> > > -		ndev->vqs[i].avail_idx = 0;
> > > -		ndev->vqs[i].used_idx = 0;
> > > -	}
> > > -}
> > > -
> > >  /* TODO: cross-endian support */
> > >  static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
> > >  {
> > > @@ -1822,7 +1814,6 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
> > >  	if (!status) {
> > >  		mlx5_vdpa_info(mvdev, "performing device reset\n");
> > >  		teardown_driver(ndev);
> > > -		clear_virtqueues(ndev);
> > >  		mlx5_vdpa_destroy_mr(&ndev->mvdev);
> > >  		ndev->mvdev.status = 0;
> > >  		++mvdev->generation;
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization