6.10-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>

[ Upstream commit 0a3f6b3463014b03f6ad10eacc4d1d9af75d54a1 ]

The helper calculate_psi_aligned_address() is used to convert an
arbitrary range into a size-aligned one.

The aligned_pages variable is calculated from the input start and end,
but is not adjusted when the start pfn is not aligned and the mask is
adjusted, which results in an incorrect number of pages returned.

The number of pages is used by qi_flush_piotlb() to flush caches for
the first-stage translation. With the wrong number of pages, the cache
is not synchronized, leading to inconsistencies in some cases.

Fixes: c4d27ffaa8eb ("iommu/vt-d: Add cache tag invalidation helpers")
Signed-off-by: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
Link: https://lore.kernel.org/r/20240709152643.28109-3-baolu.lu@xxxxxxxxxxxxxxx
Signed-off-by: Will Deacon <will@xxxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
 drivers/iommu/intel/cache.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
index 0a3bb38a52890..44e92638c0cd1 100644
--- a/drivers/iommu/intel/cache.c
+++ b/drivers/iommu/intel/cache.c
@@ -246,6 +246,7 @@ static unsigned long calculate_psi_aligned_address(unsigned long start,
 		 */
 		shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
 		mask = shared_bits ? __ffs(shared_bits) : MAX_AGAW_PFN_WIDTH;
+		aligned_pages = 1UL << mask;
 	}

 	*_pages = aligned_pages;
-- 
2.43.0
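
For context, here is a simplified userspace sketch (an assumption-based
illustration, not the upstream code; the constant value and helper names
are illustrative) of the alignment logic described in the commit message.
It shows why aligned_pages must be recomputed once mask is widened for an
unaligned start pfn:

	/*
	 * Sketch of the calculate_psi_aligned_address() alignment math.
	 * With pfn = 0x1003 and pages = 4, the mask is widened from 2 to 3,
	 * so the flush must cover 8 pages, not the original 4.
	 */
	#include <stdio.h>

	#define MAX_AGAW_PFN_WIDTH	52	/* illustrative value */

	static unsigned long roundup_pow_of_two(unsigned long n)
	{
		unsigned long r = 1;

		while (r < n)
			r <<= 1;
		return r;
	}

	/* index of the lowest set bit, like __ffs(); v must be non-zero */
	static unsigned long ffs_ul(unsigned long v)
	{
		return (unsigned long)__builtin_ctzl(v);
	}

	int main(void)
	{
		unsigned long pfn = 0x1003;	/* unaligned start pfn */
		unsigned long pages = 4;	/* pfns 0x1003..0x1006 */
		unsigned long aligned_pages = roundup_pow_of_two(pages);
		unsigned long bitmask = aligned_pages - 1;
		unsigned long mask = ffs_ul(aligned_pages);

		if (pfn & bitmask) {
			unsigned long end_pfn = pfn + pages - 1;
			/* bits shared by start and end pfn above the current mask */
			unsigned long shared_bits = ~(pfn ^ end_pfn) & ~bitmask;

			mask = shared_bits ? ffs_ul(shared_bits) : MAX_AGAW_PFN_WIDTH;
			/* without this line, *_pages would stay 4 instead of 1 << mask */
			aligned_pages = 1UL << mask;
		}

		/* prints: mask=3 aligned_pages=8 */
		printf("mask=%lu aligned_pages=%lu\n", mask, aligned_pages);
		return 0;
	}

In this example the start pfn is not aligned to the 4-page boundary, so
the mask grows to cover the whole range; reporting the old page count to
qi_flush_piotlb() would leave part of the range unflushed, which is the
inconsistency the patch fixes.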