Re: [PATCH v7 1/3] iommufd: Add data structure for Intel VT-d stage-1 cache invalidation

On 11/21/23 8:17 PM, Jason Gunthorpe wrote:
> On Tue, Nov 21, 2023 at 02:54:15AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> > Sent: Tuesday, November 21, 2023 7:05 AM
> >
> > > On Mon, Nov 20, 2023 at 08:26:31AM +0000, Tian, Kevin wrote:
> > > > From: Liu, Yi L <yi.l.liu@xxxxxxxxx>
> > > > Sent: Friday, November 17, 2023 9:18 PM
> > > >
> > > > > This adds the data structure for flushing iotlb for the nested domain
> > > > > allocated with IOMMU_HWPT_DATA_VTD_S1 type.
> > > > >
> > > > > This only supports invalidating IOTLB, but no for device-TLB as device-TLB
> > > > > invalidation will be covered automatically in the IOTLB invalidation if the
> > > > > underlying IOMMU driver has enabled ATS for the affected device.
> > > >
> > > > "no for device-TLB" is misleading. Here just say that cache invalidation
> > > > request applies to both IOTLB and device TLB (if ATS is enabled ...)
> > >
> > > I think we should forward the ATS invalidation from the guest too?
> > > That is what ARM and AMD will have to do, can we keep them all
> > > consistent?
> > >
> > > I understand Intel keeps track of enough stuff to know what the RIDs
> > > are, but is it necessary to make it different?
> >
> > probably ask the other way. Now intel-iommu driver always flushes
> > iotlb and device tlb together then is it necessary to separate them
> > in uAPI for no good (except doubled syscalls)? :)
>
> I wish I knew more about Intel CC design to be able to answer that :|
>
> Doesn't the VM issue the ATC flush command regardless? How does it
> know it has a working ATC but does not need to flush it?


The Intel VT-d spec doesn't require the driver to flush the IOTLB and
device TLB together. Therefore, the current approach of relying on caching
mode to determine whether device TLB invalidation is necessary appears to
be a performance optimization rather than an architectural requirement.

The vIOMMU driver assumes that it is running within a VM guest when
caching mode is enabled. This assumption leads to an omission of device
TLB invalidation, relying on the hypervisor to perform a combined flush
of the IOTLB and device TLB.
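
To make that concrete, below is a minimal sketch of the guest-side flush
decision described above. It is illustrative only, not the actual driver
code: the helpers qi_flush_iotlb()/qi_flush_dev_iotlb() and the
cap_caching_mode() macro exist in the intel-iommu driver, but the
surrounding logic, condition and variable names are simplified here.

	/*
	 * Illustrative sketch (simplified): issue the IOTLB invalidation,
	 * and only issue the separate device TLB invalidation when caching
	 * mode is clear, i.e. rely on the hypervisor to fold the device
	 * TLB flush into the IOTLB flush when running as a guest.
	 */
	qi_flush_iotlb(iommu, did, addr, size_order, DMA_TLB_PSI_FLUSH);

	if (!cap_caching_mode(iommu->cap) && info->ats_enabled)
		qi_flush_dev_iotlb(iommu, sid, info->pfsid,
				   info->ats_qdep, addr, size_order);

The two invalidations are separate QI descriptors issued by separate
helpers, which is why nothing architecturally forces them to be combined.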

While this optimization aims to reduce VMEXIT overhead, it introduces
potential issues:

- When a Linux guest is running on a hypervisor other than KVM/QEMU, the
  assumption that the hypervisor performs a combined IOTLB and device TLB
  flush may be incorrect, potentially leading to missed device TLB
  invalidations.

- Caching mode doesn't apply to first-stage translation. Therefore, if
  the driver uses first-stage translation and still relies on caching
  mode to decide whether device TLB invalidation is needed, the
  optimization fails.

A more reasonable optimization would be to allocate a bit in the IOMMU
capability registers. The vIOMMU driver could then leverage this bit to
determine whether it can omit the device TLB invalidation request.
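
As an illustration only (the capability bit below does not exist in the
current VT-d spec; its name and bit position are placeholders made up
here), the guest-side check could then look like:

/* Hypothetical capability bit; name and bit position are placeholders. */
#define ecap_flush_dev_tlb_combined(e)	(((e) >> 62) & 0x1)

	/*
	 * Decide the device TLB flush from an explicit capability rather
	 * than inferring it from caching mode.
	 */
	if (info->ats_enabled && !ecap_flush_dev_tlb_combined(iommu->ecap))
		qi_flush_dev_iotlb(iommu, sid, info->pfsid,
				   info->ats_qdep, addr, size_order);

That would let the combined-flush behavior be negotiated explicitly
instead of being tied to caching mode, which, as noted above, doesn't
exist for first-stage translation.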

Best regards,
baolu



