Re: mdev live migration support with vfio-mdev-pci

On 2019/9/9 7:41 PM, Liu, Yi L wrote:
Hi Alex,

Recently, we had an internal discussion on mdev live migration support
for SR-IOV. The usage is to wrap a VF as an mdev and make it migratable
when passed through to VMs. It is very similar to the vfio-mdev-pci sample
driver work, which also wraps a PF/VF as an mdev. But there is a gap: the
current vfio-mdev-pci driver is a generic driver that has no ability to
support customized regions, e.g. a state save/restore or dirty page region,
which is important for live migration. To support this usage, there are two
directions:

1) Extend the vfio-mdev-pci driver to expose an interface that lets a
vendor-specific in-kernel module (not a driver) register some ops for live
migration, and thus support customized regions. In this direction, the
vfio-mdev-pci driver is in charge of the hardware; the in-kernel vendor
specific module only provides the customized region emulation. A rough
sketch of such a registration interface follows below.
- Pros: it will be helpful if we want to expose some user-space ABI in
         future, since it is a generic driver.
- Cons: no apparent cons from my point of view; folks, please keep me honest.
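To make direction 1 concrete, here is a minimal sketch of what such a
vendor ops registration could look like. Every name in it
(vfio_mdev_pci_vendor_ops, vfio_mdev_pci_register_vendor_ops, ...) is
hypothetical and only illustrates the idea; it is not existing kernel code.

/*
 * Hypothetical sketch only: a vendor module registers region emulation
 * ops with the generic vfio-mdev-pci driver. None of these symbols
 * exist today; they just illustrate direction 1.
 */
struct vfio_mdev_pci_vendor_ops {
	struct module *owner;
	/* add vendor-specific regions (e.g. migration state, dirty log)
	 * on top of the regions the generic driver already exposes */
	int (*add_regions)(struct mdev_device *mdev);
	/* emulate accesses to those vendor-specific regions */
	ssize_t (*region_rw)(struct mdev_device *mdev, char __user *buf,
			     size_t count, loff_t *ppos, bool iswrite);
	/* tear down vendor state when the mdev goes away */
	void (*remove_regions)(struct mdev_device *mdev);
};

/* called by the vendor module for the VF it wants to make migratable */
int vfio_mdev_pci_register_vendor_ops(struct pci_dev *pdev,
		const struct vfio_mdev_pci_vendor_ops *ops);
void vfio_mdev_pci_unregister_vendor_ops(struct pci_dev *pdev);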

2) Further abstract out the generic parts of the vfio-mdev-pci driver into a
library and let the vendor driver call the interfaces exposed by this
library, e.g. APIs to wrap a VF as an mdev and to make a non-singleton
IOMMU group usable by VFIO when a vendor driver wants to wrap a VF as an
mdev. In this direction, the device driver is still in charge of the
hardware. A rough sketch of such a library interface follows below.
- Pros: the device driver still owns the device, which looks more
         "reasonable".
- Cons: no apparent cons; it may be unable to provide a unified user-space
         ABI if one is needed in the future.
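For comparison, a minimal sketch of what the direction-2 library could
look like from a vendor driver's point of view. The library function
names are made up for illustration; only struct mdev_parent_ops is the
existing mdev interface.

/*
 * Hypothetical library interface (illustrative only): the vendor driver
 * keeps probing and owning the VF, and calls into the library to wrap
 * it as an mdev and to make its IOMMU group usable by VFIO.
 */
int mdev_pci_wrap_device(struct pci_dev *pdev,
		const struct mdev_parent_ops *ops);
void mdev_pci_unwrap_device(struct pci_dev *pdev);

/*
 * Generic helpers the vendor driver can reuse for standard PCI BAR and
 * config space emulation, while adding its own migration regions.
 */
ssize_t mdev_pci_default_rw(struct mdev_device *mdev, char __user *buf,
		size_t count, loff_t *ppos, bool iswrite);
int mdev_pci_default_mmap(struct mdev_device *mdev,
		struct vm_area_struct *vma);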

Any thoughts on the above usage and the two directions? Also, Kevin, Yan and
Shaopeng, please keep me honest if anything is missed.

Best Wishes,
Yi Liu


Actually, we have an option 3:

3) A high-level abstraction of the device instead of a bus-specific one
(e.g. PCI). For hardware that can do virtio on its datapath, we want to go
this way. This means we won't expose a PCI device to userspace; instead, we
will expose a vhost device to userspace, which already has an API for e.g.
dirty page logging and vring state get/set. A rough userspace sketch
follows below.
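As a minimal userspace sketch of that direction, assuming the device shows
up as a vhost char device: the /dev/vhost-mdev path is hypothetical, while
the ioctls are the existing ones from <linux/vhost.h>.

/*
 * Hypothetical sketch: read back vring state and enable dirty page
 * logging on a vhost-style device. The device path is made up; the
 * ioctls are the existing vhost ones.
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int checkpoint_queue0(const char *path, uint64_t log_base)
{
	struct vhost_vring_state state = { .index = 0 };
	int fd = open(path, O_RDWR);

	if (fd < 0)
		return -1;
	ioctl(fd, VHOST_SET_OWNER);
	/* vring state get: last available index of queue 0 */
	ioctl(fd, VHOST_GET_VRING_BASE, &state);
	/* dirty page logging: point the device at a userspace log bitmap */
	ioctl(fd, VHOST_SET_LOG_BASE, &log_base);
	return fd;
}

/* e.g. checkpoint_queue0("/dev/vhost-mdev", (uint64_t)log_bitmap); */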

Thanks



