On 2019/10/31 10:01 PM, Tiwei Bie wrote:
This patch introduces an mdev based hardware vhost backend.
This backend is built on top of the same abstraction used
in virtio-mdev and provides a generic vhost interface for
userspace to accelerate the virtio devices in the guest.
This backend is implemented as an mdev device driver on top
of the same mdev device ops used in virtio-mdev, but with
a different mdev class id, and it registers the device
as a VFIO device for userspace to use. Userspace can set up
the IOMMU with the existing VFIO container/group APIs and
then get the device fd with the device name. After getting
the device fd of this device, userspace can use vhost ioctls
to set up the backend.
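For illustration, a minimal userspace sketch of this flow (untested;
the group number and mdev UUID below are placeholders, and error
handling is elided):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <linux/vhost.h>

static int open_vhost_mdev_backend(void)
{
        int container = open("/dev/vfio/vfio", O_RDWR);
        int group = open("/dev/vfio/12", O_RDWR); /* placeholder group */
        int device;
        uint64_t features;

        /* Set up the IOMMU with the existing VFIO APIs. */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1v2_IOMMU);

        /* Guest memory would be mapped for DMA here with
         * VFIO_IOMMU_MAP_DMA, as for any other VFIO device. */

        /* Get the device fd with the device name (placeholder UUID). */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD,
                       "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001");

        /* The device fd then accepts the existing vhost ioctls. */
        ioctl(device, VHOST_GET_FEATURES, &features);
        ioctl(device, VHOST_SET_FEATURES, &features);

        return device;
}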
Signed-off-by: Tiwei Bie <tiwei.bie@xxxxxxxxx>
---
This patch depends on the series below:
https://lkml.org/lkml/2019/10/30/62
v3 -> v4:
- Rebase on top of virtio-mdev series v6;
- Some minor tweaks and improvements;
v2 -> v3:
- Fix the return value (Jason);
- Don't cache unnecessary information in vhost-mdev (Jason);
- Get rid of the memset in open (Jason);
- Add comments for VHOST_SET_MEM_TABLE, ... (Jason);
- Filter out unsupported features in vhost-mdev (Jason);
- Add _GET_DEVICE_ID ioctl (Jason);
- Add _GET_CONFIG/_SET_CONFIG ioctls (Jason);
- Drop _GET_QUEUE_NUM ioctl (Jason);
- Fix the copy-paste errors in _IOW/_IOR usage;
- Some minor fixes and improvements;
v1 -> v2:
- Replace _SET_STATE with _SET_STATUS (MST);
- Check status bits at each step (MST);
- Report the max ring size and max number of queues (MST);
- Add missing MODULE_DEVICE_TABLE (Jason);
- Only support the network backend w/o multiqueue for now;
- Some minor fixes and improvements;
- Rebase on top of virtio-mdev series v4;
RFC v4 -> v1:
- Implement vhost-mdev as an mdev device driver directly and
connect it to VFIO container/group (Jason);
- Pass ring addresses as GPAs/IOVAs in vhost-mdev to avoid
meaningless HVA->GPA translations (Jason);
RFC v3 -> RFC v4:
- Build vhost-mdev on top of the same abstraction used by
virtio-mdev (Jason);
- Introduce vhost fd and pass VFIO fd via SET_BACKEND ioctl (MST);
RFC v2 -> RFC v3:
- Reuse vhost's ioctls instead of inventing a VFIO regions/irqs
based vhost protocol on top of vfio-mdev (Jason);
RFC v1 -> RFC v2:
- Introduce a new VFIO device type to build a vhost protocol
on top of vfio-mdev;
drivers/vfio/mdev/mdev_core.c | 20 ++
drivers/vfio/mdev/mdev_private.h | 1 +
drivers/vhost/Kconfig | 12 +
drivers/vhost/Makefile | 3 +
drivers/vhost/mdev.c | 556 +++++++++++++++++++++++++++++++
include/linux/mdev.h | 5 +
include/uapi/linux/vhost.h | 18 +
include/uapi/linux/vhost_types.h | 8 +
8 files changed, 623 insertions(+)
create mode 100644 drivers/vhost/mdev.c
diff --git a/drivers/vfio/mdev/mdev_core.c b/drivers/vfio/mdev/mdev_core.c
index 22ca589750d8..109dbac01a8f 100644
--- a/drivers/vfio/mdev/mdev_core.c
+++ b/drivers/vfio/mdev/mdev_core.c
@@ -96,6 +96,26 @@ mdev_get_virtio_ops(struct mdev_device *mdev)
}
EXPORT_SYMBOL(mdev_get_virtio_ops);
+/* Specify the vhost device ops for the mdev device. This
+ * must be called during the create() callback of a vhost
+ * mdev device.
+ */
+void mdev_set_vhost_ops(struct mdev_device *mdev,
+ const struct virtio_mdev_device_ops *vhost_ops)
+{
+ mdev_set_class(mdev, MDEV_CLASS_ID_VHOST);
+ mdev->vhost_ops = vhost_ops;
+}
+EXPORT_SYMBOL(mdev_set_vhost_ops);
+
+/* Get the vhost device ops for the mdev device. */
+const struct virtio_mdev_device_ops *
+mdev_get_vhost_ops(struct mdev_device *mdev)
+{
+ WARN_ON(mdev->class_id != MDEV_CLASS_ID_VHOST);
+ return mdev->vhost_ops;
+}
+EXPORT_SYMBOL(mdev_get_vhost_ops);
+
struct device *mdev_dev(struct mdev_device *mdev)
{
return &mdev->dev;
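For context, a rough sketch of how a vendor parent driver might use
this from its create() callback (the driver and ops names below are
hypothetical; only mdev_set_vhost_ops() comes from this patch):

/* Hypothetical vendor ops table implementing the virtio-mdev
 * device ops (set_vq_address, set_vq_num, get_features, ...). */
static const struct virtio_mdev_device_ops my_vhost_ops;

static int my_parent_create(struct kobject *kobj, struct mdev_device *mdev)
{
        /* Tag the device with the vhost class id and ops so the
         * vhost-mdev driver can bind it and expose it via VFIO. */
        mdev_set_vhost_ops(mdev, &my_vhost_ops);
        return 0;
}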
diff --git a/drivers/vfio/mdev/mdev_private.h b/drivers/vfio/mdev/mdev_private.h
index 7b47890c34e7..5597c846e52f 100644
--- a/drivers/vfio/mdev/mdev_private.h
+++ b/drivers/vfio/mdev/mdev_private.h
@@ -40,6 +40,7 @@ struct mdev_device {
union {
const struct vfio_mdev_device_ops *vfio_ops;
const struct virtio_mdev_device_ops *virtio_ops;
+ const struct virtio_mdev_device_ops *vhost_ops;
Any reason why virtio_ops is not used for vhost here?
Otherwise looks good.
Thanks