Re: To extend the feature of vfio-mdev

On Thu, Oct 19, 2017 at 12:56:04PM -0600, Alex Williamson wrote:
> Date: Thu, 19 Oct 2017 12:56:04 -0600
> From: Alex Williamson <alex.williamson@xxxxxxxxxx>
> To: Kenneth Lee <liguozhu@xxxxxxxxxxxxx>
> CC: Jon Masters <jcm@xxxxxxxxxxxxxx>, Jon Masters <jcm@xxxxxxxxxx>,
>  Jonathan Cameron <jonathan.cameron@xxxxxxxxxx>, liubo95@xxxxxxxxxx,
>  xuzaibo@xxxxxxxxxx
> Subject: Re: To extend the feature of vfio-mdev
> Message-ID: <20171019125604.26577eda@xxxxxxxxxx>
> 
> 
> Hi Kenneth,
> 
> On Thu, 19 Oct 2017 12:13:46 +0800
> Kenneth Lee <liguozhu@xxxxxxxxxxxxx> wrote:
> 
> > Dear Alex,
> > 
> > I hope this mail finds you well. This is to discuss the possibility of
> > extending the vfio-mdev feature to form a general accelerator framework
> > for Linux. I have named the framework "WrapDrive".
> > 
> > I gave a presentation at Linaro Connect SFO17 (ref:
> > http://connect.linaro.org/resource/sfo17/sfo17-317/), and discussed it
> > with Jon Masters. He said he could connect us for further cooperation.
> > 
> > The idea of WrapDrive is to create an mdev for every user application so
> > that they can share the same PF or VF facility. This is important for
> > accelerators, because in most cases we cannot create a VF for every
> > process.
> > 
> > WrapDrive needs to add the following features on top of vfio and
> > vfio-mdev:
> > 
> > 1. Define a unified ABI in sysfs so the same type of
> >    accelerator/algorithm can be managed from user space
> 
> We already have a defined, standard mdev interface where vendor drivers
> can add additional attributes.  If warpdrive is a wrapper around
> vfio-mdev, can't it define standard attributes w/o vfio changes?

Yes. We would just define the necessary attributes so that applications with
the same requirements can treat the devices uniformly.
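
For example (just a sketch of what I mean; the "algorithm" attribute and the
acc_* names below are made up for illustration, they are not something vfio or
the current WrapDrive code already defines), a vendor driver could publish the
agreed attributes through the existing supported_type_groups mechanism:

#include <linux/device.h>
#include <linux/mdev.h>
#include <linux/module.h>

/* a WrapDrive-agreed, read-only type attribute, e.g. the algorithm
 * this mdev type accelerates */
static ssize_t algorithm_show(struct kobject *kobj, struct device *dev,
                              char *buf)
{
        return sprintf(buf, "rsa\n");
}
MDEV_TYPE_ATTR_RO(algorithm);

static struct attribute *acc_type_attrs[] = {
        &mdev_type_attr_algorithm.attr,
        /* the usual name/device_api/available_instances attributes
         * would be listed here as well */
        NULL,
};

static struct attribute_group acc_type_group = {
        .name  = "acc-1",       /* appears under mdev_supported_types/ */
        .attrs = acc_type_attrs,
};

static struct attribute_group *acc_type_groups[] = {
        &acc_type_group,
        NULL,
};

static const struct mdev_parent_ops acc_mdev_ops = {
        .owner                  = THIS_MODULE,
        .supported_type_groups  = acc_type_groups,
        /* .create/.remove/.read/.write/.ioctl/.mmap omitted */
};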

> 
> > 2. Let the mdev use the parent dev's iommu facility
> 
> What prevents you from doing this now?  The mdev vendor driver is
> entirely responsible for managing the DMA of each mdev device.  Mdev
> vGPUs use the GTT of the parent device to do this today, vfio only
> tracks user mappings and provides pinned pages to the vendor driver on
> request.  IOW, this sounds like something within the scope of the
> vendor driver, not the vfio-mdev core.

I'm sorry, I don't know much about how i915 works. But according to the
implementation of vfio_iommu_type1_attach_group(), the mdev's iommu_group is
added to the external_domain list, while vfio_iommu_map() only calls
iommu_map() for the groups on the domain list.

Therefore, an ioctl(VFIO_IOMMU_MAP_DMA) on a container that only holds the
mdev's iommu_group won't map anything. What is the mdev vendor driver expected
to do? Should it register with the notification chain, or use another
interface? Is this intended by the mdev framework? I think it may be necessary
to provide some standard way by default.
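
To make the question concrete, this is roughly what I understand a vendor
driver has to do today with the existing vfio_pin_pages() /
vfio_register_notifier() interfaces (only a sketch; the acc_* names are mine
and error/unwind handling is omitted):

#include <linux/iommu.h>
#include <linux/mdev.h>
#include <linux/notifier.h>
#include <linux/vfio.h>

/* called by vfio when the user does VFIO_IOMMU_UNMAP_DMA */
static int acc_iommu_notify(struct notifier_block *nb, unsigned long action,
                            void *data)
{
        if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
                struct vfio_iommu_type1_dma_unmap *unmap = data;

                /* tear down our own translation for
                 * [unmap->iova, unmap->iova + unmap->size) and
                 * vfio_unpin_pages() what we pinned there */
        }
        return NOTIFY_OK;
}

static struct notifier_block acc_iommu_nb = {
        .notifier_call = acc_iommu_notify,
};

/* at open time: ask to be told about DMA unmaps */
static int acc_open(struct mdev_device *mdev)
{
        unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;

        return vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
                                      &events, &acc_iommu_nb);
}

/* when the device needs to access an iova: pin the backing page (vfio
 * looks it up in the user's VFIO_IOMMU_MAP_DMA mappings) and program
 * it into the parent device's own translation (the GTT for vGPU, the
 * SMMU in our case) */
static int acc_map_one_page(struct mdev_device *mdev, unsigned long iova)
{
        unsigned long user_pfn = iova >> PAGE_SHIFT;
        unsigned long phys_pfn;
        int ret;

        ret = vfio_pin_pages(mdev_dev(mdev), &user_pfn, 1,
                             IOMMU_READ | IOMMU_WRITE, &phys_pfn);
        if (ret != 1)
                return ret < 0 ? ret : -EFAULT;

        /* ... program phys_pfn << PAGE_SHIFT into the parent's tables ... */
        return 0;
}

If every accelerator vendor driver has to repeat something like this, a
default implementation in the core seems worthwhile.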

> 
> > 3. Let the iommu driver accept more than one iommu_domain for the same
> >    device. The substream ID or PASID should be supported for that
> 
> You're really extending the definition of an iommu_domain to include
> PASID to do this, I don't think it makes sense in the general case.  So
> perhaps you're talking about a PASID management layer sitting on top of
> an iommu_domain.  AIUI for PCIe, a device has a requester ID which is
> used to find the context entry for that device.  The IOMMU may support
> PASID, which would cause a first level lookup via those set of page
> tables, or it might only support second level translation.  The
> iommu_domain is a reflection of that initial, single requester ID.

Maybe I misunderstand this. But IOMMU hardware such as the ARM SMMU supports
multiple page tables per device, each referenced by something like an ASID. If
we are to support that in Linux, iommu_domain seems to be the best choice of
object to represent each page table (no matter whether you call it a cookie,
an ID, or something else). Otherwise, where would you get an object referring
to it?
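
To show the gap concretely, with today's iommu API the attachment looks like
the sketch below; only existing calls are used, and the comment marks the
piece that item 3 asks for (a way to select the domain by substream ID/PASID):

#include <linux/device.h>
#include <linux/iommu.h>

static struct iommu_domain *acc_alloc_domain(struct device *parent)
{
        struct iommu_domain *dom = iommu_domain_alloc(parent->bus);

        if (!dom)
                return NULL;

        /*
         * Today this binds the whole requester ID of 'parent' to 'dom'.
         * There is no way to say "attach this domain only for substream
         * ID / PASID N", which is what a per-process mdev would need so
         * that each mdev gets its own page table in the SMMU.
         */
        if (iommu_attach_device(dom, parent)) {
                iommu_domain_free(dom);
                return NULL;
        }

        return dom;
}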

> 
> > 4. Support SVM in vfio and iommu
> 
> There are numerous discussions about this ongoing.

Yes. I was just noting that we need that support.

> 
> > We have some PoC code here:
> > https://github.com/Kenneth-Lee/linux-kernel-wrapdrive
> > with documentation in Documentation/wrapdrive. We currently keep the code
> > together with our crypto driver.
> > 
> > But we hope it can be used broadly. Do you think we can add the module
> > to the vfio subsystem?
> 
> I think what you're describing is mostly a wrapper around the existing
> vfio-mdev model, I don't think it's necessarily part of the vfio
> subsystem.  As SVM support is added to vfio, I expect we'll have new
> ioctls for things such as binding the PASID table to a container and
> vfio-mdev would need to be extended to support that, allowing the
> vendor driver to apply that PASID table to the iommu_domain of the host
> device.  Is "warpdrive_k" effectively a shim layer for accelerator type
> devices to make use of vfio-mdev in a more common way and sharing more
> code than the existing vGPU related mdev drivers?  Thanks,
> 

Yes. We could also put it into drivers/misc, but we think that creates a heavy
dependence on mdev, so we wanted to know your view first. Thanks.

> Alex
> 
> PS, why aren't we at least copying the new linux-accelerators list on
> this topic after Jonathan went to the trouble of stoking community
> interest? 

Yes. I was just afraid that you would have other concerns about an initial
mail sent straight to the list, so I am CCing the list starting from this
mail. ;)

-- 
			-Kenneth(Hisilicon)


