On Wed, Oct 04, 2023 at 02:56:53PM +0200, Eugenio Perez Martin wrote:
> On Tue, Jul 4, 2023 at 12:16 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
> >
> > On Mon, Jul 03, 2023 at 05:26:02PM -0700, Si-Wei Liu wrote:
> > >
> > >
> > > On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> > > > On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> > > > > Offer this backend feature as mlx5 is compatible with it. It allows it
> > > > > to do live migration with CVQ, dynamically switching between passthrough
> > > > > and shadow virtqueue.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@xxxxxxxxxx>
> > > > Same comment.
> > > to which?
> > >
> > > -Siwei
> >
> > VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is too narrow a use-case to commit to it
> > as a kernel/userspace ABI: what if one wants to start rings in some
> > other specific order?
> > As was discussed on list, a better promise is not to access ring
> > until the 1st kick. vdpa can then do a kick when it wants
> > the device to start accessing rings.
> >
>
> Friendly ping about this series,
>
> Now that VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK has been merged for
> vdpa_sim, does it make sense for mlx too?
>
> Thanks!

For sure. I was just busy with a qemu pull, will handle this next.

> > > > > ---
> > > > >  drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > > > >  1 file changed, 7 insertions(+)
> > > > >
> > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > index 9138ef2fb2c8..5f309a16b9dc 100644
> > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > @@ -7,6 +7,7 @@
> > > > >  #include <uapi/linux/virtio_net.h>
> > > > >  #include <uapi/linux/virtio_ids.h>
> > > > >  #include <uapi/linux/vdpa.h>
> > > > > +#include <uapi/linux/vhost_types.h>
> > > > >  #include <linux/virtio_config.h>
> > > > >  #include <linux/auxiliary_bus.h>
> > > > >  #include <linux/mlx5/cq.h>
> > > > > @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
> > > > >  	flush_workqueue(ndev->mvdev.wq);
> > > > >  }
> > > > >
> > > > > +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> > > > > +{
> > > > > +	return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> > > > > +}
> > > > > +
> > > > >  static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
> > > > >  {
> > > > >  	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > > > @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > > > >  	.get_vq_align = mlx5_vdpa_get_vq_align,
> > > > >  	.get_vq_group = mlx5_vdpa_get_vq_group,
> > > > >  	.get_device_features = mlx5_vdpa_get_device_features,
> > > > > +	.get_backend_features = mlx5_vdpa_get_backend_features,
> > > > >  	.set_driver_features = mlx5_vdpa_set_driver_features,
> > > > >  	.get_driver_features = mlx5_vdpa_get_driver_features,
> > > > >  	.set_config_cb = mlx5_vdpa_set_config_cb,
> > > > > --
> > > > > 2.39.3
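
As background for the VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK negotiation discussed
above, here is a minimal userspace sketch (not part of this patch) of how a VMM
could probe and acknowledge the bit through the existing vhost-vdpa backend-features
ioctls. The /dev/vhost-vdpa-0 path and acking only this single bit are assumptions
for illustration; a real VMM acknowledges its full supported backend-feature set.

/*
 * Illustrative sketch only, not part of this patch: probe and ack
 * VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK on a vhost-vdpa device.
 * The device node path is an assumption for this example.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>
#include <linux/vhost_types.h>

int main(void)
{
	uint64_t features;
	int fd = open("/dev/vhost-vdpa-0", O_RDWR); /* hypothetical node */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Which backend features does this vhost-vdpa device offer? */
	if (ioctl(fd, VHOST_GET_BACKEND_FEATURES, &features)) {
		perror("VHOST_GET_BACKEND_FEATURES");
		close(fd);
		return 1;
	}

	if (features & (1ULL << VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)) {
		/* Ack just this bit for the example; a real VMM acks its full set. */
		uint64_t ack = 1ULL << VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK;

		if (ioctl(fd, VHOST_SET_BACKEND_FEATURES, &ack)) {
			perror("VHOST_SET_BACKEND_FEATURES");
			close(fd);
			return 1;
		}
		printf("vrings may be enabled after DRIVER_OK\n");
	} else {
		printf("vrings must be enabled before DRIVER_OK\n");
	}

	close(fd);
	return 0;
}

With the feature acked, the VMM may issue VHOST_VDPA_SET_VRING_ENABLE on individual
virtqueues after setting DRIVER_OK, which is what the shadow-CVQ live-migration flow
above relies on.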