Re: [RFC PATCH v2 3/3] VFIO: Type1 IOMMU mapping support for vGPU

On Fri, 11 Mar 2016 08:55:44 -0800
Neo Jia <cjia@xxxxxxxxxx> wrote:

> On Fri, Mar 11, 2016 at 09:13:15AM -0700, Alex Williamson wrote:
> > On Fri, 11 Mar 2016 04:46:23 +0000
> > "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
> >   
> > > > From: Neo Jia [mailto:cjia@xxxxxxxxxx]
> > > > Sent: Friday, March 11, 2016 12:20 PM
> > > > 
> > > > On Thu, Mar 10, 2016 at 11:10:10AM +0800, Jike Song wrote:    
> > > > >    
> > > > > >> Is it supposed to be the caller who should set
> > > > > >> up IOMMU by DMA api such as dma_map_page(), after calling
> > > > > >> vgpu_dma_do_translate()?
> > > > > >>    
> > > > > >
> > > > > > I don't think you need to call dma_map_page here. Once you have the pfn available
> > > > > > to your GPU kernel driver, you can just go ahead and set up the mapping as you
> > > > > > normally do, such as by calling pci_map_sg and its friends.
> > > > > >    
> > > > >
> > > > > Technically it's definitely OK to call the DMA API from the caller rather than here;
> > > > > however, personally I think it is a bit counter-intuitive: IOMMU page tables
> > > > > should be constructed within the VFIO IOMMU driver.
> > > > >    
> > > > 
> > > > Hi Jike,
> > > > 
> > > > For vGPU, what we have is just a virtual device and a fake IOMMU group; therefore,
> > > > the actual interaction with the real GPU should be managed by the GPU vendor driver.
> > > >     
> > > 
> > > Hi, Neo,
> > > 
> > > It seems we have different thoughts on this. Regardless of whether it's a virtual or
> > > physical device, IMO, VFIO should manage the IOMMU configuration. The only difference is:
> > > 
> > > - for a physical device, VFIO directly invokes the IOMMU API to set IOMMU entries (GPA->HPA);
> > > - for a virtual device, VFIO invokes the kernel DMA APIs, which indirectly set IOMMU entries
> > > if CONFIG_IOMMU is enabled in the kernel (GPA->IOVA).
> > > 
> > > This would provide a unified way to manage the translation in VFIO, and then the
> > > vendor-specific driver only needs to query and use the returned IOVA corresponding to a GPA.
> > > 
> > > Doing so has another benefit: it makes the underlying vGPU driver VMM-agnostic. For KVM,
> > > yes, we can use pci_map_sg. However, for Xen it's different (today Dom0 doesn't see an
> > > IOMMU; in the future there will be a PVIOMMU implementation), so a different code path is
> > > required. It's better to abstract such specific knowledge out of the vGPU driver, which just
> > > uses whatever dma_addr is returned by another agent (VFIO here, or another Xen-specific
> > > agent) in a centralized way.
> > > 
> > > Alex, what's your opinion on this?  
> > 
> > The sticky point is how vfio, which is only handling the vGPU, has a
> > reference to the physical GPU on which to call DMA API operations.  If
> > that reference is provided by the vendor vGPU driver, for example
> > vgpu_dma_do_translate_for_pci(gpa, pci_dev), I don't see any reason to
> > be opposed to such an API.  I would not condone vfio deriving or owning
> > a reference to the physical device on its own, though; that's in the
> > realm of the vendor vGPU driver.  It does seem a bit cleaner and should
> > reduce duplicate code if the vfio vGPU iommu interface could handle the
> > iommu mapping for the vendor vgpu driver when necessary.  Thanks,  
> 
> Hi Alex,
> 
> Since we don't want to allow the vfio iommu to derive or own a reference to the
> physical device, I think it is still better not to provide such a pci_dev to the
> vfio iommu type1 driver.
> 
> Also, I need to point out that if the vfio iommu is going to set up iommu page
> tables for the real underlying physical device, given the single RID we all have
> here, the iommu mapping code has to return the new "IOVA" that is mapped to the
> HPA, which the GPU vendor driver will then have to program into its DMA engine.
> This is very different from the current VFIO IOMMU mapping logic.
> 
> And we still have to provide another interface to translate the GPA to
> HPA for CPU mapping.
> 
> In the current RFC, we only need a single interface that provides the most
> basic information to the GPU vendor driver, without taking the risk of
> leaking a reference to the VFIO IOMMU.

I don't see this as some fundamental difference of opinion; it's really
just whether vfio provides a "pin this GFN and return the HPA" function
or whether that function could be extended to include "... and also map
it through the DMA API for the provided device and return the host
IOVA".  It might even still be a single function in vfio for CPU vs.
device mapping, where the device and IOVA return pointer are NULL when
only pinning is required for CPU access (though maybe there are better
ways to provide CPU access than pinning).  A wrapper could even give the
appearance that those are two separate functions.
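
To make that concrete, here is a rough sketch of what such a combined
call could look like.  Everything below is hypothetical and only
illustrative: vgpu_pin_pages()/vgpu_unpin_pages() stand in for whatever
pinning primitive the type1 vgpu iommu code ends up with, and the
struct device is supplied by the vendor driver, never derived by vfio:

/*
 * Hypothetical sketch, not the RFC's actual interface.  Pin the page
 * backing @gfn and return its host PFN; if the caller also passes the
 * physical device and an IOVA return pointer, map the page through the
 * DMA API and hand back the host IOVA.
 */
static int vgpu_pin_and_map(unsigned long gfn, unsigned long *hpfn,
                            struct device *dev, dma_addr_t *iova)
{
        struct page *page;
        int ret;

        ret = vgpu_pin_pages(gfn, 1, &page);    /* hypothetical pin helper */
        if (ret)
                return ret;

        *hpfn = page_to_pfn(page);

        /* NULL dev/iova means the caller only wants CPU access. */
        if (dev && iova) {
                *iova = dma_map_page(dev, page, 0, PAGE_SIZE,
                                     DMA_BIDIRECTIONAL);
                if (dma_mapping_error(dev, *iova)) {
                        vgpu_unpin_pages(gfn, 1);       /* hypothetical */
                        return -EFAULT;
                }
        }

        return 0;
}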

So long as vfio isn't owning or deriving the device for the DMA API
calls and we don't introduce some complication in page accounting, this
really just seems like a question of whether moving the DMA API
handling into vfio is common between the vendor vGPU drivers and
whether we reduce the overall amount and complexity of code by giving
the vendor drivers the opportunity to do both operations with one
interface.  If, as Kevin suggests, it also provides some additional
abstraction for Xen vs. KVM, even better.  Thanks,

Alex
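
P.S. For illustration only, the vendor-driver side of an interface like
the sketch above could then be as simple as the following (again with
hypothetical names; the pci_dev is the physical GPU that the vendor
driver already owns, not something vfio derived):

/* Hypothetical vendor-driver usage of the sketch above. */
static int vendor_map_guest_page(struct pci_dev *pdev, unsigned long gfn)
{
        unsigned long hpfn;
        dma_addr_t iova;
        int ret;

        /* Pin and map for device access in one call. */
        ret = vgpu_pin_and_map(gfn, &hpfn, &pdev->dev, &iova);
        if (ret)
                return ret;

        /* ... hand @iova to the GPU's DMA engine ... */
        return 0;
}

A CPU-only caller would simply pass NULL for the device and IOVA
arguments and use the returned host PFN.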


