Re: [PATCH] VFIO/VMBUS: Add VFIO VMBUS driver support

On Wed, 20 Nov 2019 13:31:47 -0700
Alex Williamson <alex.williamson@xxxxxxxxxx> wrote:

> On Wed, 20 Nov 2019 11:46:11 -0800
> Stephen Hemminger <stephen@xxxxxxxxxxxxxxxxxx> wrote:
> 
> > On Wed, 20 Nov 2019 12:07:15 -0700
> > Alex Williamson <alex.williamson@xxxxxxxxxx> wrote:
> >   
> > > On Wed, 20 Nov 2019 10:35:03 -0800
> > > Stephen Hemminger <stephen@xxxxxxxxxxxxxxxxxx> wrote:
> > >     
> > > > On Tue, 19 Nov 2019 15:56:20 -0800
> > > > "Alex Williamson" <alex.williamson@xxxxxxxxxx> wrote:
> > > >       
> > > > > On Mon, 11 Nov 2019 16:45:07 +0800
> > > > > lantianyu1986@xxxxxxxxx wrote:
> > > > >         
> > > > > > From: Tianyu Lan <Tianyu.Lan@xxxxxxxxxxxxx>
> > > > > > 
> > > > > > This patch adds VFIO VMBUS driver support in order to expose
> > > > > > VMBUS devices to user space drivers (referencing the Hyper-V
> > > > > > UIO driver).  DPDK now has netvsc PMD support and may get VMBUS
> > > > > > resources via the VFIO interface with this new driver.
> > > > > > 
> > > > > > So far, Hyper-V doesn't provide virtual IOMMU support, so this
> > > > > > driver needs to be used in VFIO no-iommu mode.
> > > > > 
> > > > > Let's be clear here, vfio no-iommu mode taints the kernel and was a
> > > > > compromise allowing us to re-use vfio-pci in its entirety, so it had
> > > > > high code-reuse value for minimal code and maintenance investment.  It
> > > > > was certainly not intended to provoke new drivers that rely on this mode
> > > > > of operation.  In fact, no-iommu should be discouraged as it provides
> > > > > absolutely no isolation.  I'd therefore ask, why should this be in the
> > > > > kernel versus any other unsupportable out-of-tree driver?  It appears
> > > > > almost entirely self-contained.  Thanks,
> > > > > 
> > > > > Alex        
> > > > 
> > > > The current VMBUS access from userspace is via uio_hv_generic;
> > > > there is not (and will not be) any out-of-tree driver for this.
> > > 
> > > I'm talking about the driver proposed here.  It can only be used in a
> > > mode that taints the kernel it's running on, so why would we sign
> > > up to support 400 lines of code that have no safe way to be used?
> > >      
> > > > The new driver from Tianyu is to make VMBUS behave like PCI.
> > > > This simplifies the code for DPDK and other usermode device drivers
> > > > because they can use the same APIs for VMBus as for PCI.
> > > 
> > > But this doesn't re-use the vfio-pci API at all, it explicitly defines
> > > a new vfio-vmbus API over the vfio interfaces.  So a user mode driver
> > > might be able to reuse some vfio support, but I don't see how this has
> > > anything to do with PCI.
> > >     
> > > > Unfortunately, since Hyper-V does not support virtual IOMMU yet,
> > > > the only usage model is with the no-iommu taint.
> > > 
> > > Which is what makes it unsupportable and prompts the question why it
> > > should be included in the mainline kernel as it introduces a
> > > maintenance burden and normalizes a usage model that's unsafe.  Thanks,    
> > 
> > Many existing userspace drivers are unsafe:
> >   - out of tree DPDK igb_uio is unsafe.

> Why is it out of tree?

Agreed, it really shouldn't be. The original developers hoped that
VFIO and VFIO no-iommu would replace it, but since DPDK has to run
on ancient distros and other non-VFIO hardware, it still lives.

It is also not suitable for merging for many reasons, mostly
because it allows MSI, and others don't want that.
 
> 
> 
> >   - VFIO with noiommu is unsafe.  
> 
> Which taints the kernel and requires raw I/O user privs.
> 
> >   - hv_uio_generic is unsafe.  
> 
> Gosh, it's pretty coy about this, no kernel tainting, no user
> capability tests, no scary dmesg or Kconfig warnings.  Do users know
> it's unsafe?

It should taint in the same way as VFIO with no-iommu.
Yes, it is documented as unsafe (but not in the kernel source).
It really has the same unsafeness as uio_pci_generic, and there are
no warnings around that.
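For illustration, a minimal sketch of what that tainting could look like, modeled on what vfio-core does when a no-iommu group is opened. This is hypothetical for uio_hv_generic (the function name is made up; only the taint/capability pattern is taken from vfio):

```c
#include <linux/capability.h>
#include <linux/device.h>
#include <linux/kernel.h>

/*
 * Illustrative sketch only, NOT actual uio_hv_generic code.
 * Applies the same pattern vfio-core uses for no-iommu access:
 * require raw I/O privilege, then taint the kernel loudly.
 */
static int hv_uio_open_sketch(struct device *dev)
{
	/* Unprivileged users must not get raw device access. */
	if (!capable(CAP_SYS_RAWIO))
		return -EPERM;

	/* Flag the kernel as tainted, as vfio no-iommu does. */
	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
	dev_warn(dev, "unsafe no-IOMMU userspace access; kernel tainted\n");
	return 0;
}
```

That would give uio_hv_generic the same scary dmesg line and taint bit that vfio no-iommu users already see.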

> 
> > This new driver is not any better or worse. This sounds like a complete
> > repeat of the discussion that occurred before introducing VFIO noiommu mode.
> > 
> > Shouldn't vmbus vfio taint the kernel in the same way as vfio noiommu does?  
> 
> Yes, the no-iommu interaction happens at the vfio-core level.  I can't
> speak for any of the uio interfaces you mention, but I know that
> uio_pci_generic is explicitly intended for non-DMA use cases; in fact
> it was the effort to enable MSI/X support in that driver, and the
> maintainer's objections to breaking that usage model, that triggered
> no-iommu support for vfio.  IIRC, the rationale was largely code reuse
> at both the kernel and userspace driver level, while imposing a
> minimal burden on vfio-core for this dummy iommu driver.  vfio
> explicitly does not provide a DMA mapping solution for no-iommu use
> cases because I'm not willing to maintain any more lines of code to
> support this usage model.  The tainting imposed by this model and the
> incomplete API were intended as a big warning to discourage its use,
> and as features like vIOMMU become more prevalent and bare metal
> platforms without physical IOMMUs hopefully become less prevalent,
> maybe no-iommu can be phased out or removed.

Doing vIOMMU at scale with a non-Linux host takes a long time.
Tainting doesn't make it happen any sooner; it just makes users'
lives harder. Blaming the user and giving them a bad experience
doesn't help anyone.

> You might consider this a re-hashing of those previous discussions, but
> to me it seems like taking advantage of and promoting an interface that
> should have plenty of warning signs that this is not a safe way to use
> the device from userspace.  Without some way to take advantage of the
> code in a safe way, this just seems to be normalizing an unsupportable
> usage model.  Thanks,


The use case for all this stuff has been dedicated infrastructure.
It would be good if security were more baked in, but it isn't.
Most users cover for it by either being dedicated appliances
or using an LSM to protect UIO.


