Re: [PATCH] vhost: introduce vDPA based backend

On 2020/2/5 6:33 PM, Michael S. Tsirkin wrote:
On Wed, Feb 05, 2020 at 09:30:14AM +0000, Shahaf Shuler wrote:
Wednesday, February 5, 2020 9:50 AM, Jason Wang:
Subject: Re: [PATCH] vhost: introduce vDPA based backend
On 2020/2/5 3:15 PM, Shahaf Shuler:
Wednesday, February 5, 2020 4:03 AM, Tiwei Bie:
Subject: Re: [PATCH] vhost: introduce vDPA based backend

On Tue, Feb 04, 2020 at 11:30:11AM +0800, Jason Wang wrote:
On 2020/1/31 11:36 AM, Tiwei Bie:
This patch introduces a vDPA based vhost backend. This backend is
built on top of the same interface defined in virtio-vDPA and
provides a generic vhost interface for userspace to accelerate the
virtio devices in guest.

This backend is implemented as a vDPA device driver on top of the
same ops used in virtio-vDPA. It will create a char device entry
named vhost-vdpa/$vdpa_device_index for userspace to use. Userspace
can use vhost ioctls on top of this char device to set up the backend.

Signed-off-by: Tiwei Bie <tiwei.bie@xxxxxxxxx>
[...]

+static long vhost_vdpa_do_dma_mapping(struct vhost_vdpa *v)
+{
+	/* TODO: fix this */
Before trying to do this, it looks to me we need the following during
probe:

1) if set_map() is not supported by the vDPA device, probe the IOMMU
that is supported by the vDPA device
2) allocate an IOMMU domain

And then:

3) pin pages through GUP and do proper accounting
4) store the GPA->HPA mapping in the umem
5) generate diffs of the memory table and use the IOMMU API to set up
the DMA mapping in this method

For 1), I'm not sure the parent device is sufficient for doing this,
or whether we need to introduce a new API like the iommu_device in mdev.
Agree. We may also need to introduce something like the iommu_device.

Would it be better for the map/unmap logic to happen inside each device?
Devices that need the IOMMU will call iommu APIs from inside the driver
callback.


Technically, this can work. But if it can be done by vhost-vdpa, it will make the
vDPA driver more compact and easier to implement.
I'd need to see the layering of such a proposal, but I am not sure.
Vhost-vdpa is a generic framework, while the DMA mapping is vendor specific.
Maybe vhost-vdpa can have some shared code needed to operate on the IOMMU, so drivers can re-use it. To me it seems simpler than exposing a new iommu device.

Devices that have other ways to do the DMA mapping will call the
proprietary APIs.


To confirm, do you prefer:

1) map/unmap
It is not only that. AFAIR there are also flush and invalidate calls, right?

or

2) pass all maps at one time?
To me this seems more straightforward.
It is correct that under hotplug and a large number of memory segments
the driver will need to understand the diff (or not, and just reload
the new configuration).
However, my assumption here is that memory hotplug is a heavy flow
anyway, and the driver's extra cycles will not be that visible.
I think we can just allow both; after all, vhost already has both interfaces ...
We just need a flag that tells userspace whether it needs to
update all maps aggressively or can wait for a fault.


It looks to me that such a flag is not a must, and we can introduce it later when devices support page fault.

Thanks
