From: Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>

commit 16a75bbe480c3598b3af57a2504ea89b1e32c3ac upstream.

The Intel IOMMU driver implements its IOTLB flush queue with domain-selective
or PASID-selective invalidations. In that case there is no need to track the
IOVA page range and sync IOTLBs, which may cause a significant performance hit.

This patch adds a check to avoid IOVA gather page and IOTLB sync for the
lazy path.

Performance on a Sapphire Rapids 100Gb NIC improves as follows (as measured
by iperf send):

w/o this fix:  ~48 Gbits/s
with this fix: ~54 Gbits/s

Cc: <stable@xxxxxxxxxxxxxxx>
Fixes: 2a2b8eaa5b25 ("iommu: Handle freelists when using deferred flushing in iommu drivers")
Reviewed-by: Robin Murphy <robin.murphy@xxxxxxx>
Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
Tested-by: Sanjay Kumar <sanjay.k.kumar@xxxxxxxxx>
Signed-off-by: Sanjay Kumar <sanjay.k.kumar@xxxxxxxxx>
Signed-off-by: Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20230209175330.1783556-1-jacob.jun.pan@xxxxxxxxxxxxxxx
Signed-off-by: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/iommu/intel/iommu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4359,7 +4359,12 @@ static size_t intel_iommu_unmap(struct i
 	if (dmar_domain->max_addr == iova + size)
 		dmar_domain->max_addr = iova;

-	iommu_iotlb_gather_add_page(domain, gather, iova, size);
+	/*
+	 * We do not use page-selective IOTLB invalidation in flush queue,
+	 * so there is no need to track page and sync iotlb.
+	 */
+	if (!iommu_iotlb_gather_queued(gather))
+		iommu_iotlb_gather_add_page(domain, gather, iova, size);

 	return size;
 }
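
For context on the gating helper: iommu_iotlb_gather_queued() reports whether
the unmap is on the deferred-flush (lazy) path, where the DMA layer has
already committed to flushing from the IOVA flush queue rather than
page-selectively. A minimal sketch of the helper and the gather data it
reads follows; this is simplified from include/linux/iommu.h for
illustration and is not the verbatim upstream code.

/*
 * Sketch only, assuming the iommu_iotlb_gather layout of this kernel
 * era; field set simplified for illustration.
 */
struct iommu_iotlb_gather {
	unsigned long		start;
	unsigned long		end;
	size_t			pgsize;
	struct list_head	freelist;
	bool			queued;	/* flushes deferred to a flush queue */
};

/* True when the caller will flush via the flush queue (lazy mode). */
static inline bool iommu_iotlb_gather_queued(struct iommu_iotlb_gather *gather)
{
	return gather->queued;
}

With that in place, the hunk above skips per-page range tracking whenever
gather->queued is set, so the subsequent IOTLB sync has no page range to
process on the lazy path and the flush queue's domain- or PASID-selective
invalidation covers the unmapped range instead.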