Re: [RFC PATCH 00/21] iommu/amd: Introduce support for HW accelerated vIOMMU w/ nested page table

On Thu, Jun 22, 2023 at 06:15:17PM -0700, Suthikulpanit, Suravee wrote:
> Jason,
> 
> On 6/22/2023 6:46 AM, Jason Gunthorpe wrote:
> > On Wed, Jun 21, 2023 at 06:54:47PM -0500, Suravee Suthikulpanit wrote:
> > 
> > > Since the IOMMU hardware virtualizes the guest command buffer, this
> > > allows IOMMU operations such as invalidation of guest pages
> > > (i.e. stage 1) to be accelerated when the command is issued by the
> > > guest kernel, without intervention from the hypervisor.
> > 
> > This is similar to what we are doing on ARM as well.
> 
> Ok
> 
> > > This series is implemented on top of the IOMMUFD framework. It leverages
> > > the existing APIs and ioctls for providing guest IOMMU information
> > > (i.e. struct iommu_hw_info_amd), and allowing the guest to provide
> > > guest page table information (i.e. struct iommu_hwpt_amd_v2) for
> > > setting up a user domain.
> > > 
> > > Please see [4], [5], and [6] for more detail on the AMD HW-vIOMMU.
> > > 
> > > NOTES
> > > -----
> > > This series is organized into two parts:
> > >    * Part1: Preparing IOMMU driver for HW-vIOMMU support (Patch 1-8).
> > > 
> > >    * Part2: Introducing HW-vIOMMU support (Patch 9-21).
> > > 
> > >    * Patches 12 and 21 extend the existing IOMMUFD ioctls to support
> > >      additional operations, which can be categorized into:
> > >      - Ioctls to init/destroy an AMD HW-vIOMMU instance.
> > >      - Ioctls to attach/detach guest devices to the AMD HW-vIOMMU instance.
> > >      - Ioctls to attach/detach guest domains to the AMD HW-vIOMMU instance.
> > >      - Ioctls to trap certain AMD HW-vIOMMU MMIO register accesses.
> > >      - Ioctls to trap AMD HW-vIOMMU command buffer initialization.
> > 
> > No one else seems to need this kind of stuff, why is AMD different?
> > 
> > Emulation and mediation to create the vIOMMU is supposed to be in the
> > VMM side, not in the kernel. I don't want to see different models by
> > vendor.
> 
> These ioctls are not necessary for emulation, which I agree should be
> done on the VMM side (e.g. QEMU). These ioctls provide the information
> needed to program the AMD IOMMU hardware to provide a
> hardware-assisted virtualized IOMMU.

You have one called 'trap'; it shouldn't be like this. It seems like
this is trying to parse the command buffer in the kernel; that should
be done in the VMM.

> In this series, the AMD IOMMU GCR3 table is actually set up when
> IOMMUFD_CMD_HWPT_ALLOC is called, for which the driver provides a hook
> via struct iommu_ops.domain_alloc_user().

That isn't entirely right either; the GCR3 should be programmed into
the HW during iommu_domain attach.

> The AMD-specific information is communicated from QEMU via
> iommu_domain_user_data.iommu_hwpt_amd_v2. This is similar to INTEL
> and ARM.

This is only for requesting the iommu_domain and supplying the gcr3 VA
for later use.
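
To make that concrete, here is a minimal sketch of the split I mean
(every struct, field, and helper name below is an illustrative
assumption, not taken from the posted series): allocation only records
what userspace supplied, and the hardware DTE is only written at
attach time.

struct amd_user_domain {
	struct iommu_domain domain;
	u64 gcr3_gpa;	/* guest PA of the GCR3 table, from userspace */
};

/* domain_alloc_user(): only capture the userspace-supplied data */
static struct iommu_domain *
amd_domain_alloc_user(struct device *dev,
		      const struct iommu_hwpt_amd_v2 *data)
{
	struct amd_user_domain *udom = kzalloc(sizeof(*udom), GFP_KERNEL);

	if (!udom)
		return ERR_PTR(-ENOMEM);
	udom->gcr3_gpa = data->gcr3;	/* field name is an assumption */
	return &udom->domain;
}

/* attach_dev(): this is where the GCR3 root should reach the HW DTE */
static int amd_attach_device(struct iommu_domain *domain, struct device *dev)
{
	struct amd_user_domain *udom =
		container_of(domain, struct amd_user_domain, domain);

	/* hypothetical helper: write the DTE GCR3 root + GV bit, flush */
	return amd_iommu_set_gcr3_root(dev, udom->gcr3_gpa);
}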
> 
> Please also note that for the AMD HW-vIOMMU device model in QEMU, the
> guest memory used for the IOMMU device table is trapped when the guest
> IOMMU driver programs the guest Device Table Entry (gDTE). Then QEMU
> reads the content of the gDTE to extract the necessary information for
> setting up the guest (stage-1) page table, and calls
> iommufd_backend_alloc_hwpt().

This is the same as ARM. It is a two-step operation: you de-duplicate
the gDTE entries (e.g. to share vDIDs), allocating a HWPT if it doesn't
already exist, then you attach the HWPT to the physical device the
gDTE's vRID implies.
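
Roughly like this VMM-side sketch (the names are illustrative, and the
iommufd_backend_alloc_hwpt() signature is an assumption on my part):

#include <stdint.h>
#include <stddef.h>

uint32_t iommufd_backend_alloc_hwpt(uint16_t vdid, uint64_t gcr3);
					/* assumed signature */

struct hwpt_cache_entry {
	uint16_t vdid;		/* guest DomID parsed from the gDTE */
	uint32_t hwpt_id;	/* iommufd HWPT handle */
};

static uint32_t hwpt_for_gdte(struct hwpt_cache_entry *cache, size_t n,
			      uint16_t vdid, uint64_t gcr3)
{
	size_t i;

	/* step 1: de-duplicate -- gDTEs sharing a vDID share one HWPT */
	for (i = 0; i < n; i++)
		if (cache[i].vdid == vdid)
			return cache[i].hwpt_id;

	/* step 2a: no HWPT for this translation yet, allocate one */
	return iommufd_backend_alloc_hwpt(vdid, gcr3);
}

/*
 * step 2b: attach the returned HWPT to the physical device that the
 * gDTE's vRID implies, e.g. via VFIO's attach ioctl on that device.
 */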

> There is still work to be done to fully support PASID. I'll
> take a look at this next.

I would expect the PASID work is only about invalidation?
 
> > To start, focus only on user space page tables and kernel-mediated
> > invalidation and fit into the same model as everyone else. This is
> > approx the same patches and uAPI you see for ARM and Intel. AFAICT
> > AMD's HW is very similar to ARM's, so you should be aligning to the
> > ARM design.
> 
> I think the user space page table is covered as described above.

I'm not sure; it doesn't look like what I would expect.

> It seems that user-space is supposed to call the
> IOMMUFD_CMD_HWPT_INVALIDATE ioctl for both INTEL and ARM to issue
> invalidation for the stage-1 page table. Please let me know if I
> misunderstand the purpose of this ioctl.

Yes, the VMM traps the invalidation and issues it like this.
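
Something like this sketch, assuming a uAPI shaped like the on-list
proposals (the struct layout, field names, and entry format here are
assumptions, not settled uAPI):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>	/* assumed to carry the proposed uAPI */

/* one invalidation entry, format assumed for illustration */
struct viommu_inv_entry {
	uint64_t iova;
	uint64_t size;
	uint32_t pasid;
	uint32_t flags;
};

static int forward_invalidation(int iommufd, uint32_t hwpt_id,
				const struct viommu_inv_entry *ent)
{
	struct iommu_hwpt_invalidate cmd = {
		.size = sizeof(cmd),
		.hwpt_id = hwpt_id,		/* HWPT of the guest domain */
		.data_uptr = (uintptr_t)ent,
		.entry_len = sizeof(*ent),
		.entry_num = 1,
	};

	/* one trapped guest invalidation -> one (batchable) ioctl */
	return ioctl(iommufd, IOMMU_HWPT_INVALIDATE, &cmd);
}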
 
> However, for AMD, since the HW-vIOMMU virtualizes the guest command
> buffer, when it sees a page table invalidation command in the guest
> command buffer, it takes care of the invalidation using information in
> the DomIDMap, which maps the guest domain ID (gDomID) of a particular
> guest to the corresponding host domain ID (hDomID) of the device, and
> invalidates the nested translation according to the specified PASID,
> DomID, and GVA.

The VMM should do all of this stuff. The VMM parses the command buffer
and the VMM converts the commands to invalidation ioctls.
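
For AMD's command format that loop is small; a sketch (the CMD_* opcode
values mirror the Linux driver's constants, taken from bits 63:60 of
each 16-byte command; everything else is illustrative):

#include <stdint.h>

#define CMD_COMPL_WAIT		0x01
#define CMD_INV_DEV_ENTRY	0x02
#define CMD_INV_IOMMU_PAGES	0x03

struct guest_cmd { uint32_t data[4]; };	/* 16-byte AMD IOMMU command */

static void drain_guest_cmdbuf(struct guest_cmd *buf, uint32_t head,
			       uint32_t tail, uint32_t nr_cmds)
{
	while (head != tail) {
		struct guest_cmd *cmd = &buf[head];
		uint8_t op = cmd->data[1] >> 28;	/* opcode field */

		switch (op) {
		case CMD_INV_IOMMU_PAGES:
			/* map gDomID -> hwpt_id, then forward_invalidation() */
			break;
		case CMD_COMPL_WAIT:
			/* emulate completion: store to sema, inject vIRQ */
			break;
		default:
			/* DTE/IRT commands stay in device emulation */
			break;
		}
		head = (head + 1) % nr_cmds;
	}
}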

I'm unclear if AMD supports a mode where the HW can directly operate
a command/invalidation queue in the VM without virtualization, e.g. DMA
from guest memory and deliver completion interrupts directly to the
guest.

If it always needs SW then the SW part should be in the VMM, not the
kernel. Then you don't need to load all these tables into the kernel.

> The DomIDMap is set up by the host IOMMU driver during VM initialization.
> When the guest IOMMU driver attaches the VFIO pass-through device to a
> guest iommu_group (i.e. domain), it programs the gDTE with the gDomID.
> This access is trapped by QEMU, and the gDomID is read from the gDTE and
> communicated to the hypervisor via the newly proposed ioctl
> VIOMMU_DOMAIN_ATTACH. Now the DomIDMap is created for the VFIO device.

The gDomID should be supplied when the HWPT is allocated, not via new
ioctls.
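
i.e. the allocation data could simply carry it (a sketch; whether the
posted iommu_hwpt_amd_v2 has room for this field is an assumption on my
part):

#include <stdint.h>

/* illustrative only -- not the layout from the posted series */
struct iommu_hwpt_amd_v2_sketch {
	uint64_t gcr3;		/* guest PA of the GCR3 table */
	uint16_t gdom_id;	/* guest DomID, so the kernel can build its
				 * gDomID -> hDomID mapping at alloc time
				 * instead of via VIOMMU_DOMAIN_ATTACH */
};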

> Are you referring to the IOMMU API for SVA/PASID stuff:
>   * struct iommu_domain_ops.set_dev_pasid()
>   * struct iommu_ops.remove_dev_pasid()
>   * ...

Yes
 
> If so, we are working on it separately in parallel, and will be sending out
> RFC soon.

Great

Jason


