From: Yunfei Wang <yf.wang@xxxxxxxxxxxx>

In alloc_iova_fast(), if __alloc_and_insert_iova_range() fails,
alloc_iova_fast() flushes the rcache and retries the allocation.
The retry does not work as intended, though: the failed call to
__alloc_and_insert_iova_range() records the failing allocation size
in max32_alloc_size (iovad->max32_alloc_size = size), so when the
retry reaches __alloc_and_insert_iova_range() again, it trips the
check (size >= iovad->max32_alloc_size) and jumps straight to
iova32_full. The retry therefore never actually searches for an
IOVA again.

Fix this by resetting max32_alloc_size to its initial value, the
iovad's dma_32bit_pfn, after flushing the rcache and before
retrying, so that the retried allocation in
__alloc_and_insert_iova_range() can actually be carried out.

Signed-off-by: Yunfei Wang <yf.wang@xxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx> # 5.10.*
---
v2: Cc stable@xxxxxxxxxxxxxxx
  1. This patch needs to be merged into the stable branch, so add
     stable@xxxxxxxxxxxxxxx to the mail list.
---
 drivers/iommu/iova.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index b28c9435b898..0c085ae8293f 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -453,6 +453,7 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 retry:
 	new_iova = alloc_iova(iovad, size, limit_pfn, true);
 	if (!new_iova) {
+		unsigned long flags;
 		unsigned int cpu;
 
 		if (!flush_rcache)
@@ -463,6 +464,12 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 		for_each_online_cpu(cpu)
 			free_cpu_cached_iovas(cpu, iovad);
 		free_global_cached_iovas(iovad);
+
+		/* Reset max32_alloc_size after flushing rcache for retry */
+		spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
+		spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+
 		goto retry;
 	}
-- 
2.18.0