Re: [RFC PATCH v2 3/3] VFIO: Type1 IOMMU mapping support for vGPU

On Fri, Mar 11, 2016 at 10:56:24AM -0700, Alex Williamson wrote:
> On Fri, 11 Mar 2016 08:55:44 -0800
> Neo Jia <cjia@xxxxxxxxxx> wrote:
> 
> > > > Alex, what's your opinion on this?  
> > > 
> > > The sticky point is how vfio, which is only handling the vGPU, has a
> > > reference to the physical GPU on which to call DMA API operations.  If
> > > that reference is provided by the vendor vGPU driver, for example
> > > vgpu_dma_do_translate_for_pci(gpa, pci_dev), I don't see any reason to
> > > be opposed to such an API.  I would not condone vfio deriving or owning
> > > a reference to the physical device on its own, though; that's in the
> > > realm of the vendor vGPU driver.  It does seem a bit cleaner and should
> > > reduce duplicate code if the vfio vGPU iommu interface could handle the
> > > iommu mapping for the vendor vgpu driver when necessary.  Thanks,  
> > 
> > Hi Alex,
> > 
> > Since we don't want to allow the vfio iommu to derive or own a reference to the
> > physical device, I think it is still better not to provide such a pci_dev to the
> > vfio iommu type1 driver.
> > 
> > Also, I need to point out that if the vfio iommu is going to set up the iommu page
> > table for the real underlying physical device, then given the single RID we all
> > share here, the iommu mapping code has to return the new "IOVA" that is mapped to
> > the HPA, which the GPU vendor driver will then have to program into its DMA
> > engine. This is very different from the current VFIO IOMMU mapping logic.
> > 
> > And we still have to provide another interface to translate the GPA to
> > HPA for CPU mapping.
> > 
> > In the current RFC, we only need a single interface to provide the most
> > basic information to the GPU vendor driver, without taking the risk of
> > leaking a ref to the VFIO IOMMU.
> 
> I don't see this as some fundamental difference of opinion; it's really
> just whether vfio provides a "pin this GFN and return the HPA" function
> or whether that function could be extended to include "... and also map
> it through the DMA API for the provided device and return the host
> IOVA".  It might even still be a single function to vfio for CPU vs
> device mapping, where the device and IOVA return pointers are NULL when
> only pinning is required for CPU access (though maybe there are better
> ways to provide CPU access than pinning).  A wrapper could even give the
> appearance that those are two separate functions.
> 
> So long as vfio isn't owning or deriving the device for the DMA API
> calls and we don't introduce some complication in page accounting, this
> really just seems like a question of whether moving the DMA API
> handling into vfio is common between the vendor vGPU drivers, and whether
> we are reducing the overall amount and complexity of code by giving the
> vendor drivers the opportunity to do both operations with one interface.

Hi Alex,

OK, I will look into adding such a facility and will probably include it in a
later rev of the vGPU IOMMU code, provided we don't run into any surprises or
the issues you mentioned above.

Thanks,
Neo

> If, as Kevin suggests, it also provides some additional abstraction
> for Xen vs KVM, even better.  Thanks,
> 
> Alex