Re: [PATCH v2 14/15] vfio: Document dual stage control

On Wed, Jun 17, 2020 at 06:27:27AM +0000, Liu, Yi L wrote:
> > From: Stefan Hajnoczi <stefanha@xxxxxxxxx>
> > Sent: Monday, June 15, 2020 5:41 PM
> > On Thu, Jun 11, 2020 at 05:15:33AM -0700, Liu Yi L wrote:
> >
> > > From: Eric Auger <eric.auger@xxxxxxxxxx>
> > >
> > > The VFIO API was enhanced to support nested stage control: a bunch of
> > > new ioctls and usage guidelines.
> > >
> > > Let's document the process to follow to set up nested mode.
> > >
> > > Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
> > > CC: Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>
> > > Cc: Alex Williamson <alex.williamson@xxxxxxxxxx>
> > > Cc: Eric Auger <eric.auger@xxxxxxxxxx>
> > > Cc: Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>
> > > Cc: Joerg Roedel <joro@xxxxxxxxxx>
> > > Cc: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
> > > Signed-off-by: Eric Auger <eric.auger@xxxxxxxxxx>
> > > Signed-off-by: Liu Yi L <yi.l.liu@xxxxxxxxx>
> > > ---
> > > v1 -> v2:
> > > *) new in v2, compared with Eric's original version, pasid table bind
> > >    and fault reporting is removed as this series doesn't cover them.
> > >    Original version from Eric.
> > >    https://lkml.org/lkml/2020/3/20/700
> > >
> > >  Documentation/driver-api/vfio.rst | 64 +++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 64 insertions(+)
> > >
> > > diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
> > > index f1a4d3c..06224bd 100644
> > > --- a/Documentation/driver-api/vfio.rst
> > > +++ b/Documentation/driver-api/vfio.rst
> > > @@ -239,6 +239,70 @@ group and can access them as follows::
> > >  	/* Gratuitous device reset and go... */
> > >  	ioctl(device, VFIO_DEVICE_RESET);
> > >
> > > +IOMMU Dual Stage Control
> > > +------------------------
> > > +
> > > +Some IOMMUs support 2 stages/levels of translation. Stage corresponds
> > > +to the ARM terminology while level corresponds to Intel's VT-d terminology.
> > > +In the following text we use either without distinction.
> > > +
> > > +This is useful when the guest is exposed to a virtual IOMMU and
> > > +some devices are assigned to the guest through VFIO. Then the guest
> > > +OS can use stage 1 (GIOVA -> GPA or GVA -> GPA), while the hypervisor
> > > +uses stage 2 for VM isolation (GPA -> HPA).
> > > +
> > > +Under dual stage translation, the guest gets ownership of the stage 1
> > > +page tables and also owns stage 1 configuration structures. The
> > > +hypervisor owns the root configuration structure (for security
> > > +reason), including stage 2 configuration. This works as long
> > > +configuration structures and page table
> > 
> > s/as long configuration/as long as configuration/
> 
> got it.
> 
> > 
> > > +format are compatible between the virtual IOMMU and the physical IOMMU.
> > 
> > s/format/formats/
> 
> I see.
> 
> > > +
> > > +Assuming the HW supports it, this nested mode is selected by choosing
> > > +the VFIO_TYPE1_NESTING_IOMMU type through:
> > > +
> > > +    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
> > > +
> > > +This forces the hypervisor to use stage 2, leaving stage 1
> > > +available for guest usage. The guest stage 1 format depends on the
> > > +IOMMU vendor, and so does the nesting configuration method.
> > > +User space should check the format and configuration method after
> > > +setting the nesting type by using:
> > > +
> > > +    ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
> > > +
> > > +Details can be found in Documentation/userspace-api/iommu.rst. For
> > > +Intel VT-d, each stage 1 page table is bound to host by:
> > > +
> > > +    nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
> > > +    memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
> > > +    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> > > +
> > > +As mentioned above, guest OS may use stage 1 for GIOVA->GPA or GVA->GPA.
> > > +GVA->GPA page tables are available when PASID (Process Address Space
> > > +ID) is exposed to guest. e.g. guest with PASID-capable devices assigned.
> > > +For such page table binding, the bind_data should include PASID info,
> > > +which is allocated either by the guest itself or by the host. This
> > > +depends on the hardware vendor, e.g. Intel VT-d requires the PASID to
> > > +be allocated from the host. This requirement is reported via
> > > +VFIO_IOMMU_GET_INFO. User space could allocate a PASID from the host by:
> > > +
> > > +    req.flags = VFIO_IOMMU_ALLOC_PASID;
> > > +    ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
> > 
> > It is not clear how the userspace application determines whether PASIDs must be
> > allocated from the host via VFIO_IOMMU_PASID_REQUEST or if the guest itself can
> > allocate PASIDs. The text mentions VFIO_IOMMU_GET_INFO but what exactly
> > should the userspace application check?
> 
> For VT-d, the 3.0 spec introduced the Virtual Command interface for PASID
> allocation; the guest requests PASIDs from the host if it detects the
> interface. The application should check the IOMMU_NESTING_FEAT_SYSWIDE_PASID
> bit in the below info reported by VFIO_IOMMU_GET_INFO. And virtual VT-d
> should not report SVA related capabilities to the guest if SYSWIDE_PASID
> is not supported by the kernel.
> 
> +struct iommu_nesting_info {
> +	__u32	size;
> +	__u32	format;
> +	__u32	features;
> +#define IOMMU_NESTING_FEAT_SYSWIDE_PASID	(1 << 0)
> +#define IOMMU_NESTING_FEAT_BIND_PGTBL		(1 << 1)
> +#define IOMMU_NESTING_FEAT_CACHE_INVLD		(1 << 2)
> +	__u32	flags;
> +	__u8	data[];
> +};
> https://lore.kernel.org/linux-iommu/1591877734-66527-3-git-send-email-yi.l.liu@xxxxxxxxx/

I see. Is it possible to add this information into this patch or at
least a reference so readers know where to find out exactly how to do
this?
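
For example, I would find it easier to follow if the doc spelled out the
check roughly as below. This is only a sketch of my understanding written
against the uapi proposed in this series (not tested): I'm guessing the
nesting info comes back as a capability in the VFIO_IOMMU_GET_INFO
capability chain, that the capability id is called something like
VFIO_IOMMU_TYPE1_INFO_CAP_NESTING, and that struct iommu_nesting_info sits
right after the capability header, so please correct whatever doesn't match
the actual patches:

/*
 * Sketch only.  Assumes the uapi headers from this series; the nesting
 * capability id name and payload placement are my guesses.
 */
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <linux/iommu.h>

/* Returns 1 if PASIDs must be allocated from the host, 0 if the guest
 * may manage them itself, negative on error. */
static int host_manages_pasids(int container)
{
	struct vfio_iommu_type1_info probe = { .argsz = sizeof(probe) };
	struct vfio_iommu_type1_info *info;
	struct vfio_info_cap_header *cap;
	struct iommu_nesting_info *nesting = NULL;
	__u32 off;
	int ret = 0;

	/* First call only reports how large the full info (with caps) is. */
	if (ioctl(container, VFIO_IOMMU_GET_INFO, &probe))
		return -1;

	info = calloc(1, probe.argsz);
	if (!info)
		return -1;
	info->argsz = probe.argsz;
	if (ioctl(container, VFIO_IOMMU_GET_INFO, info)) {
		ret = -1;
		goto out;
	}

	/* Walk the capability chain looking for the nesting capability. */
	if (info->flags & VFIO_IOMMU_INFO_CAPS) {
		for (off = info->cap_offset; off; off = cap->next) {
			cap = (struct vfio_info_cap_header *)((__u8 *)info + off);
			if (cap->id == VFIO_IOMMU_TYPE1_INFO_CAP_NESTING) {
				/* assuming the payload follows the header */
				nesting = (struct iommu_nesting_info *)(cap + 1);
				break;
			}
		}
	}

	/* SYSWIDE_PASID set -> PASIDs must come from the host. */
	if (nesting && (nesting->features & IOMMU_NESTING_FEAT_SYSWIDE_PASID))
		ret = 1;
out:
	free(info);
	return ret;
}

and then the application would only issue VFIO_IOMMU_PASID_REQUEST with
VFIO_IOMMU_ALLOC_PASID (as in the doc text above) when that returns 1,
otherwise let the vIOMMU hand out PASIDs itself, correct?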

Stefan
