Re: VFIO based vGPU (was Re: [Announcement] 2015-Q3 release of XenGT - a Mediated ...)

On 01/19/2016 03:05 AM, Alex Williamson wrote:
> On Mon, 2016-01-18 at 16:56 +0800, Jike Song wrote:
>>
>> Would you elaborate a bit on 'iommu backends' here? Previously I
>> thought the entire type1 backend would be duplicated. If not, what is
>> supposed to be added, a new vfio_dma_do_map?
> 
> I don't know that you necessarily want to re-use any of the
> vfio_iommu_type1.c code as-is; it's just the API that we'll want to
> keep consistent so QEMU doesn't need to learn about a new iommu
> backend.  Opportunities for sharing certainly may arise, you may want
> to use a similar red-black tree for storing current mappings, the
> pinning code may be similar, etc.  We can evaluate on a case by case
> basis whether it makes sense to pull out common code for each of those.

It would be great if you could help abstract it :)
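
For reference, the per-mapping record in today's
drivers/vfio/vfio_iommu_type1.c looks roughly like this (paraphrased from
the current code, so take the exact fields with a grain of salt):

    struct vfio_dma {
            struct rb_node  node;
            dma_addr_t      iova;    /* device address */
            unsigned long   vaddr;   /* process virtual address */
            size_t          size;    /* size of the mapping */
            int             prot;    /* IOMMU_READ / IOMMU_WRITE */
    };

A vgpu backend would need essentially the same record keyed by iova, so
the rb-tree and the pinning helpers do look like natural candidates for
common code.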

> 
> As for an iommu backend in general, if you look at the code flow
> example in Documentation/vfio.txt, the user opens a container
> (/dev/vfio/vfio) and a group (/dev/vfio/$GROUPNUM).  The group is set
> to associate with a container instance via VFIO_GROUP_SET_CONTAINER and
> then an iommu model is set for the container with VFIO_SET_IOMMU.
>  Looking at drivers/vfio/vfio.c:vfio_ioctl_set_iommu(), we look for an
> iommu backend that supports the requested extension (VFIO_TYPE1_IOMMU),
> call the open() callback on it and then attempt to attach the group via
> the attach_group() callback.  At this latter callback, the iommu
> backend can compare the device to those that it actually supports.  For
> instance the existing vfio_iommu_type1 will attempt to use the IOMMU
> API and should fail if the device cannot be supported with that.  The
> current loop in vfio_ioctl_set_iommu() will exit in this case, but as
> you can see in the code, it's easy to make it continue and look for
> another iommu backend that supports the requested extension.
> 

Got it. Sure, the type1 API towards userspace should be kept, with a new
backend used for vgpu.
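
Just to confirm I read the flow correctly, the userspace side from
Documentation/vfio.txt is roughly the below (the group number and device
name are only examples, error handling omitted):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    int container, group, device;
    struct vfio_group_status status = { .argsz = sizeof(status) };

    container = open("/dev/vfio/vfio", O_RDWR);
    ioctl(container, VFIO_GET_API_VERSION);                   /* expect VFIO_API_VERSION */
    ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU);

    group = open("/dev/vfio/26", O_RDWR);                     /* /dev/vfio/$GROUPNUM */
    ioctl(group, VFIO_GROUP_GET_STATUS, &status);             /* must be VIABLE */

    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);       /* vfio_ioctl_set_iommu() */

    device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");

If I understand correctly, QEMU keeps doing exactly this, and only the
backend picked inside VFIO_SET_IOMMU differs for vgpu.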

>>> The benefit here is that QEMU could work
>>> unmodified, using the type1 vfio-iommu API regardless of whether a
>>> device is directly assigned or virtual.
>>>
>>> Let's look at the type1 interface; we have simple map and unmap
>>> interfaces which map and unmap process virtual address space (vaddr) to
>>> the device address space (iova).  The host physical address is obtained
>>> by pinning the vaddr.  In the current implementation, a map operation
>>> pins pages and populates the hardware iommu.  A vgpu compatible
>>> implementation might simply register the translation into a kernel-
>>> based database to be called upon later.  When the host graphics driver
>>> needs to enable dma for the vgpu, it doesn't need to go to QEMU for the
>>> translation, it already possesses the iova to vaddr mapping, which
>>> becomes iova to hpa after a pinning operation.
>>>
>>> So, I would encourage you to look at creating a vgpu vfio iommu
>>> backend that makes use of the type1 API since it will reduce the
>>> changes necessary for userspace.
>>>
>>
>> BTW, that should be done in the 'bus' driver, right?
> 
> I think you have some flexibility between the graphics driver and the
> vfio-vgpu driver in where this is done.  If we want vfio-vgpu to be
> more generic, then vgpu device creation and management should probably
> be done in the graphics driver and vfio-vgpu should be able to probe
> that device and call back into the graphics driver to handle requests.
> If it turns out there's not much for vfio-vgpu to share, ie. it's just
> a passthrough for device specific emulation, then maybe we want a vfio-
> intel-vgpu instead.
>

Good to know that.
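
And back to the map/unmap point above: so the vgpu backend's
VFIO_IOMMU_MAP_DMA path would only record the translation, and pinning
would happen later when the graphics driver asks for it. Something like
the sketch below is what I have in mind (all vgpu_* names are made up,
locking and error paths omitted):

    #include <linux/rbtree.h>
    #include <linux/slab.h>
    #include <linux/mm.h>
    #include <linux/vfio.h>

    struct vgpu_dma {
            struct rb_node  node;
            dma_addr_t      iova;
            unsigned long   vaddr;
            size_t          size;
    };

    struct vgpu_iommu {
            struct rb_root  dma_list;       /* iova -> vaddr records */
    };

    /*
     * rb-tree helpers in the same pattern as type1's vfio_link_dma() /
     * vfio_find_dma(); bodies omitted here.
     */
    static void vgpu_dma_insert(struct vgpu_iommu *iommu, struct vgpu_dma *dma);
    static struct vgpu_dma *vgpu_dma_find(struct vgpu_iommu *iommu, dma_addr_t iova);

    /* MAP_DMA: no pinning, no hardware iommu, just remember the mapping */
    static int vgpu_dma_do_map(struct vgpu_iommu *iommu,
                               struct vfio_iommu_type1_dma_map *map)
    {
            struct vgpu_dma *dma = kzalloc(sizeof(*dma), GFP_KERNEL);

            if (!dma)
                    return -ENOMEM;

            dma->iova  = map->iova;
            dma->vaddr = map->vaddr;
            dma->size  = map->size;
            vgpu_dma_insert(iommu, dma);
            return 0;
    }

    /*
     * Called by the host graphics driver when it needs dma for the vgpu:
     * look up iova -> vaddr, pin, and hand back the hpa.
     */
    static int vgpu_dma_pin(struct vgpu_iommu *iommu, dma_addr_t iova,
                            unsigned long *hpa)
    {
            struct vgpu_dma *dma = vgpu_dma_find(iommu, iova);
            struct page *page;

            if (!dma)
                    return -EINVAL;

            /* type1 requires page-aligned vaddr/iova/size, assume the same */
            if (get_user_pages_fast(dma->vaddr + (iova - dma->iova), 1,
                                    1 /* write; real code would honor the map flags */,
                                    &page) != 1)
                    return -EFAULT;

            *hpa = page_to_pfn(page) << PAGE_SHIFT;
            return 0;
    }

Unmap (or unpin) would then just drop the record and put_page() whatever
was pinned. Does that match what you have in mind?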

>>
>> Looks like things are getting clearer overall, with small exceptions.
>> Thanks for the advice :)
> 
> Yes, please let me know how I can help.  Thanks,
> 
> Alex
> 

I will start coding soon; will do :)

--
Thanks,
Jike