This is a note to let you know that I've just added the patch titled

    iommu/vt-d: Fix incorrect cache invalidation for mm notification

to the 6.6-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     iommu-vt-d-fix-incorrect-cache-invalidation-for-mm-notification.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From e7ad6c2a4b1aa710db94060b716f53c812cef565 Mon Sep 17 00:00:00 2001
From: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
Date: Wed, 22 Nov 2023 11:26:07 +0800
Subject: iommu/vt-d: Fix incorrect cache invalidation for mm notification

From: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>

commit e7ad6c2a4b1aa710db94060b716f53c812cef565 upstream.

Commit 6bbd42e2df8f ("mmu_notifiers: call invalidate_range() when
invalidating TLBs") moved the secondary TLB invalidations into the TLB
invalidation functions to ensure that all secondary TLB invalidations
happen at the same time as the CPU invalidation, and added a flush-all
type of secondary TLB invalidation for the batched mode, where a range
of [0, -1UL) is used to indicate that the range extends to the end of
the address space.

However, using an end address of -1UL caused an overflow in the Intel
IOMMU driver, where the end address was rounded up to the next page.
As a result, both the IOTLB and the device ATC were not invalidated
correctly.

Add a flush-all helper function and call it when the invalidation range
is from 0 to -1UL, ensuring that the caches are invalidated in their
entirety.

Fixes: 6bbd42e2df8f ("mmu_notifiers: call invalidate_range() when invalidating TLBs")
Cc: stable@xxxxxxxxxxxxxxx
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Alistair Popple <apopple@xxxxxxxxxx>
Tested-by: Luo Yuzhang <yuzhang.luo@xxxxxxxxx> # QAT
Tested-by: Tony Zhu <tony.zhu@xxxxxxxxx> # DSA
Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Reviewed-by: Alistair Popple <apopple@xxxxxxxxxx>
Signed-off-by: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20231117090933.75267-1-baolu.lu@xxxxxxxxxxxxxxx
Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/iommu/intel/svm.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 50a481c895b8..ac12f76c1212 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -216,6 +216,27 @@ static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address,
 	rcu_read_unlock();
 }
 
+static void intel_flush_svm_all(struct intel_svm *svm)
+{
+	struct device_domain_info *info;
+	struct intel_svm_dev *sdev;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(sdev, &svm->devs, list) {
+		info = dev_iommu_priv_get(sdev->dev);
+
+		qi_flush_piotlb(sdev->iommu, sdev->did, svm->pasid, 0, -1UL, 0);
+		if (info->ats_enabled) {
+			qi_flush_dev_iotlb_pasid(sdev->iommu, sdev->sid, info->pfsid,
+						 svm->pasid, sdev->qdep,
+						 0, 64 - VTD_PAGE_SHIFT);
+			quirk_extra_dev_tlb_flush(info, 0, 64 - VTD_PAGE_SHIFT,
+						  svm->pasid, sdev->qdep);
+		}
+	}
+	rcu_read_unlock();
+}
+
 /* Pages have been freed at this point */
 static void intel_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn,
 					struct mm_struct *mm,
@@ -223,6 +244,11 @@ static void intel_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn,
 {
 	struct intel_svm *svm = container_of(mn, struct intel_svm, notifier);
 
+	if (start == 0 && end == -1UL) {
+		intel_flush_svm_all(svm);
+		return;
+	}
+
 	intel_flush_svm_range(svm, start,
 			      (end - start + PAGE_SIZE - 1) >> VTD_PAGE_SHIFT, 0);
 }
-- 
2.43.0



Patches currently in stable-queue which might be from baolu.lu@xxxxxxxxxxxxxxx are

queue-6.6/iommu-vt-d-fix-incorrect-cache-invalidation-for-mm-notification.patch
queue-6.6/iommu-vt-d-add-mtl-to-quirk-list-to-skip-te-disabling.patch
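[Editor's note] For context on the overflow described in the commit message above, here is a
minimal user-space sketch (not part of the patch) of the page-count arithmetic from the
unpatched notifier callback. It assumes the usual 4 KiB page size and a VTD_PAGE_SHIFT of 12;
with end == -1UL the expression wraps around and yields 0 pages, which is why the dedicated
flush-all path is needed.

/*
 * Illustration only, not kernel code.  Reproduces the page-count
 * expression used by intel_arch_invalidate_secondary_tlbs() before the
 * fix.  PAGE_SIZE and VTD_PAGE_SHIFT are assumed 4 KiB defaults.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define VTD_PAGE_SHIFT	12

int main(void)
{
	unsigned long start = 0;
	unsigned long end = -1UL;	/* "whole address space" sentinel */

	/* Same expression as the unpatched notifier callback. */
	unsigned long pages = (end - start + PAGE_SIZE - 1) >> VTD_PAGE_SHIFT;

	/* Prints 0: the wrap-around leaves nothing to invalidate. */
	printf("pages = %lu\n", pages);
	return 0;
}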