Re: [RFC PATCH v2 3/3] VFIO: Type1 IOMMU mapping support for vGPU

On Fri, 11 Mar 2016 04:46:23 +0000
"Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

> > From: Neo Jia [mailto:cjia@xxxxxxxxxx]
> > Sent: Friday, March 11, 2016 12:20 PM
> > 
> > On Thu, Mar 10, 2016 at 11:10:10AM +0800, Jike Song wrote:  
> > >  
> > > >> Is it supposed to be the caller who should set
> > > >> up the IOMMU via DMA APIs such as dma_map_page(), after calling
> > > >> vgpu_dma_do_translate()?
> > > >>  
> > > >
> > > > I don't think you need to call dma_map_page here. Once you have the pfn
> > > > available to your GPU kernel driver, you can just go ahead and set up the
> > > > mapping as you normally would, such as by calling pci_map_sg and its friends.
> > > >  
> > >
> > > Technically it's definitely OK to call the DMA API from the caller rather
> > > than here; however, personally I think it is a bit counter-intuitive: IOMMU
> > > page tables should be constructed within the VFIO IOMMU driver.
> > >  
> > 
> > Hi Jike,
> > 
> > For vGPU, what we have is just a virtual device and a fake IOMMU group;
> > therefore, the actual interaction with the real GPU should be managed by the
> > GPU vendor driver.
> >   
> 
> Hi, Neo,
> 
> It seems we have a different view on this. Regardless of whether it's a virtual
> or physical device, IMO, VFIO should manage the IOMMU configuration. The only
> difference is:
> 
> - for a physical device, VFIO directly invokes the IOMMU API to set up the
> IOMMU entry (GPA->HPA);
> - for a virtual device, VFIO invokes the kernel DMA API, which indirectly sets
> up the IOMMU entry if CONFIG_IOMMU is enabled in the kernel (GPA->IOVA);
> 
> This would provide a unified way to manage the translation in VFIO, and the
> vendor-specific driver then only needs to query and use the returned IOVA
> corresponding to a GPA.
> 
> Doing so has another benefit: it makes the underlying vGPU driver VMM-agnostic.
> For KVM, yes, we can use pci_map_sg. However, for Xen it's different (today Dom0
> doesn't see the IOMMU; in the future there will be a PVIOMMU implementation), so
> a different code path is required. It's better to abstract such specific
> knowledge out of the vGPU driver, which then just uses whatever dma_addr is
> returned by the other agent (VFIO here, or another Xen-specific agent) in a
> centralized way.
> 
> Alex, what's your opinion on this?

The sticky point is how vfio, which is only handling the vGPU, has a
reference to the physical GPU on which to call DMA API operations.  If
that reference is provided by the vendor vGPU driver, for example
vgpu_dma_do_translate_for_pci(gpa, pci_dev), I don't see any reason to
be opposed to such an API.  I would not condone vfio deriving or owning
a reference to the physical device on its own, though; that's in the
realm of the vendor vGPU driver.  It does seem a bit cleaner, and should
reduce duplicate code, if the vfio vGPU IOMMU interface could handle the
IOMMU mapping for the vendor vGPU driver when necessary.  Thanks,

Alex
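
For illustration, here is a minimal sketch of the kind of interface discussed
above: the vfio vGPU IOMMU backend doing the DMA API work against a pci_dev
supplied by the vendor driver. Only the name vgpu_dma_do_translate_for_pci()
and its (gpa, pci_dev) arguments come from the thread; vgpu_gpa_to_page() and
the rest are hypothetical stand-ins, not code from the RFC series.

#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: GPA -> pinned host page, standing in for the
 * lookup/pinning the vfio vGPU TYPE1 backend would perform for
 * vgpu_dma_do_translate().
 */
struct page *vgpu_gpa_to_page(unsigned long gpa);

dma_addr_t vgpu_dma_do_translate_for_pci(unsigned long gpa,
					 struct pci_dev *pdev)
{
	struct page *page;
	dma_addr_t iova;

	page = vgpu_gpa_to_page(gpa);
	if (!page)
		return 0;	/* 0 used as "failed" in this sketch */

	/*
	 * Map through the physical GPU's struct device.  With an IOMMU
	 * behind the device this installs a GPA->IOVA translation;
	 * without one it simply resolves to the host physical address.
	 */
	iova = dma_map_page(&pdev->dev, page, 0, PAGE_SIZE,
			    DMA_BIDIRECTIONAL);
	if (dma_mapping_error(&pdev->dev, iova))
		return 0;

	return iova;	/* vendor driver programs this into its GPU page tables */
}

The vendor vGPU driver would call this with its own physical pci_dev and
program the returned dma_addr_t into the GPU's translation structures; the
alternative discussed above is for the vendor driver to take the pfns from
vgpu_dma_do_translate() and do the pci_map_sg()/dma_map_page() step itself.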