RE: mdev live migration support with vfio-mdev-pci

> From: Alex Williamson
> Sent: Friday, September 13, 2019 11:54 PM
> 
> On Fri, 13 Sep 2019 00:28:25 +0000
> "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
> 
> > > From: Alex Williamson
> > > Sent: Thursday, September 12, 2019 10:41 PM
> > >
> > > On Mon, 9 Sep 2019 11:41:45 +0000
> > > "Liu, Yi L" <yi.l.liu@xxxxxxxxx> wrote:
> > >
> > > > Hi Alex,
> > > >
> > > > Recently, we had an internal discussion on mdev live migration
> > > > support for SR-IOV. The usage is to wrap a VF as an mdev and make
> > > > it migratable when passed through to a VM. It is very similar to
> > > > the vfio-mdev-pci sample driver work, which also wraps a PF/VF as
> > > > an mdev. But there is a gap: the current vfio-mdev-pci driver is a
> > > > generic driver with no ability to support customized regions,
> > > > e.g. a state save/restore or dirty page region, which is important
> > > > in live migration. To support the usage, there are two directions:
> > > >
> > > > 1) extend the vfio-mdev-pci driver to expose an interface that
> > > > lets a vendor-specific in-kernel module (not a driver) register
> > > > some ops for live migration, and thus support customized regions.
> > > > In this direction, the vfio-mdev-pci driver is in charge of the
> > > > hardware; the in-kernel vendor-specific module only provides the
> > > > customized region emulation.
> > > > - Pros: it will be helpful if we want to expose some user-space
> > > >         ABI in the future, since it is a generic driver.
> > > > - Cons: no apparent cons per me; my folks may keep me honest.
> > > >
> > > > 2) further abstract the generic parts of the vfio-mdev-pci driver
> > > > into a library and let the vendor driver call the interfaces this
> > > > library exposes, e.g. APIs to wrap a VF as an mdev and to make a
> > > > non-singleton iommu group vfio-viable when a vendor driver wants
> > > > to wrap a VF as an mdev. In this direction, the device driver is
> > > > still in charge of the hardware.
> > > > - Pros: the device driver still owns the device, which looks more
> > > >         "reasonable".
> > > > - Cons: no apparent cons; we may be unable to have a unified
> > > >         user-space ABI if one is needed in the future.
> > > >
> > > > Any thoughts on the above usage and the two directions? Also,
> > > > Kevin, Yan, Shaopeng could keep me honest if anything is missed.
> > >
> > > A concern with 1) is that we specifically made the vfio-mdev-pci
> > > driver a sample driver to avoid user confusion over when to use
> > > vfio-pci vs when to use vfio-mdev-pci.  This use case suggests
> > > vfio-mdev-pci becoming a peer of vfio-pci when really I think it
> > > was meant only as a demo of IOMMU-backed mdev devices and perhaps
> > > a starting point for vendors wanting to create an mdev wrapper
> > > around real hardware.  I had assumed that in the latter case, the
> > > sample driver would be forked.  Do these new suggestions indicate
> > > we're deprecating vfio-pci?  I'm not necessarily in favor of that.
> > > Couldn't we also have device-specific extensions of vfio-pci that
> > > could provide migration support for a physical device?  Do we
> > > really want to add the usage burden of the mdev sysfs interface
> > > if we're only adding migration to a VF?  Maybe instead we should
> > > add common helpers for migration that could be used by either
> > > vfio-pci or vendor-specific mdev drivers.  Ideally I think that
> > > if we're not trying to multiplex a device into multiple mdevs or
> > > trying to supplement a device that would be incomplete without
> > > mdev, and only want to enable migration for a PF/VF, we'd bind it
> > > to vfio-pci and those features would simply appear for devices
> > > we've enlightened vfio-pci to migrate.  Thanks,
> > >
> >
> > That would be better and simpler. We thought you might want to keep
> > the current vfio-pci intact. :-) btw, do you prefer putting
> > device-specific migration logic within VFIO, or building some
> > mechanism for the PF/VF driver to register and handle it? The
> > former is fully contained within VFIO but may get complex moving
> > forward. The latter keeps VFIO clean and reuses existing driver
> > logic, thus is simpler; it's just that the PF/VF driver enters a
> > special mode in which it is not bound to the PF/VF device (vfio-pci
> > is the actual driver) and merely acts as a set of callbacks to
> > handle device-specific migration requests.
> 
> I'm not sure how this native device driver registration would work;
> it seems troublesome for the user.  vfio-pci already has optional
> support for IGD extensions.  I imagine it would be similar to that,
> but we may try to modularize it within vfio-pci, perhaps similar to
> how Eric supports resets on vfio-platform.  There's also an issue
> with using the native driver: users cannot then blacklist the native
> driver if they only intend to use the device with vfio-pci.  It's
> probably best to keep it contained, but modular, within vfio-pci.
> Thanks,
> 

OK, let's do some experiments with this approach. Our current prototype
takes the other route, i.e. native driver registration, where we can
leverage many native driver functions/structures. Let's see how much
code is introduced when everything is contained in VFIO.

btw just be curious about your thoughts on this - we do have several 
other usages which leverage the mdev framework on top of VF device 
which selectively mediates some of the VF resource or even faking
new resource beyond the VF. For those usages, do you prefer to having
a common helper layer in vfio to reduce duplication, or just letting 
vendor driver to do all the business themselves (same as how mdev is
used for sharing today)?
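To make the question concrete, the helper layer we picture would
expose something along these lines (purely illustrative; none of the
vfio_mdev_helper_* functions exist today):

        /*
         * Illustrative only: a possible common helper layer in vfio
         * for vendor drivers that wrap a VF as an IOMMU-backed mdev.
         * All names below are made up.
         */
        #include <linux/pci.h>
        #include <linux/mdev.h>

        /* make the VF's (possibly non-singleton) iommu group vfio-viable */
        int vfio_mdev_helper_prepare_group(struct pci_dev *vf_dev);

        /*
         * register the VF as a single-instance mdev parent; 'ops'
         * carries the vendor's region emulation, including any
         * migration regions
         */
        int vfio_mdev_helper_register_vf(struct pci_dev *vf_dev,
                                         const struct mdev_parent_ops *ops);

        void vfio_mdev_helper_unregister_vf(struct pci_dev *vf_dev);

With something like that, each vendor driver would keep ownership of
its device while the group/mdev plumbing is not duplicated per driver.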

Thanks
Kevin


