On 2020-05-04 7:37 pm, Ajay Kumar wrote:
> The current IOVA allocation code stores a cached copy of the
> first allocated IOVA address node, and all subsequent allocations
> have no way to get past (higher than) the first allocated IOVA range.
Strictly they do, after that first allocation gets freed, or if the
first limit was <=32 bits and the subsequent limit >32 bits ;)
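
For context, the existing scheme is roughly this (simplified from
__get_cached_rbnode() and __cached_rbnode_delete_update() in
drivers/iommu/iova.c, not quoted verbatim):

        /*
         * Allocation starts its backwards walk from a cached node,
         * one per "side" of the 32-bit boundary...
         */
        static struct rb_node *
        __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
        {
                if (limit_pfn <= iovad->dma_32bit_pfn)
                        return iovad->cached32_node;
                return iovad->cached_node;
        }

        /*
         * ...and freeing moves the cached node up past the freed range,
         * which is how later allocations get to reuse the space above
         * the first allocation once it is released.
         */
        static void
        __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
        {
                struct iova *cached = rb_entry(iovad->cached32_node, struct iova, node);

                if (free->pfn_hi < iovad->dma_32bit_pfn &&
                    free->pfn_lo >= cached->pfn_lo)
                        iovad->cached32_node = rb_next(&free->node);
                /* (and similarly for cached_node on the >32-bit side) */
        }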
> This causes an issue when the dma_mask for the master device is
> changed. Though the DMA window is increased, the allocation code,
> unaware of the change, goes ahead allocating IOVA addresses lower
> than the first allocated IOVA address.
> This patch adds a check for dma_mask changes in the IOVA allocation
> function and resets the cached IOVA node to the anchor node every
> time a dma_mask change is observed.
This isn't the right approach, since limit_pfn is by design a transient
per-allocation thing. Devices with different limits may well be
allocating from the same IOVA domain concurrently, which is the whole
reason for maintaining two cached nodes to serve the expected PCI case
of mixing 32-bit and 64-bit limits. Trying to track a per-allocation
property on a per-domain basis is just going to thrash and massively
hurt such cases.
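
Concretely, picture a 64-bit-capable device and a 32-bit-limited
device allocating alternately from the same domain (hypothetical
sketch; "shift" stands for iova_shift(iovad)):

        /* every alternation invalidates a cache and forces a walk from the anchor */
        alloc_iova(iovad, size, DMA_BIT_MASK(64) >> shift, true);
                /* limit_pfn != curr_limit_pfn: cached_node thrown away */
        alloc_iova(iovad, size, DMA_BIT_MASK(32) >> shift, true);
                /* limit_pfn changed again: cached32_node thrown away */
        alloc_iova(iovad, size, DMA_BIT_MASK(64) >> shift, true);
                /* ...and so on; the cached nodes never get a chance to help */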
A somewhat more appropriate fix to the allocation loop itself has been
proposed here:
https://lore.kernel.org/linux-iommu/1588795317-20879-1-git-send-email-vjitta@xxxxxxxxxxxxxx/
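
The gist of that approach, paraphrased rather than quoted (see the
link for the actual patch): if the backwards walk from the cached
node runs out of space, retry once over the full range from the
anchor before declaring the space exhausted, instead of resetting
per-domain state up front:

        curr = __get_cached_rbnode(iovad, limit_pfn);
        curr_iova = rb_entry(curr, struct iova, node);
        retry_pfn = curr_iova->pfn_hi + 1;
retry:
        do {
                high_pfn = min(high_pfn, curr_iova->pfn_lo);
                new_pfn = (high_pfn - size) & align_mask;
                prev = curr;
                curr = rb_prev(curr);
                curr_iova = rb_entry(curr, struct iova, node);
        } while (curr && new_pfn < curr_iova->pfn_hi);

        if (high_pfn < size || new_pfn < low_pfn) {
                if (low_pfn == iovad->start_pfn && retry_pfn < limit_pfn) {
                        /* nothing fit below the cached node: retry from the top */
                        high_pfn = limit_pfn;
                        low_pfn = retry_pfn;
                        curr = &iovad->anchor.node;
                        curr_iova = rb_entry(curr, struct iova, node);
                        goto retry;
                }
                /* genuinely out of space */
        }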
Robin.

> NOTE:
> This patch is needed to address the issue discussed in the thread below:
> https://www.spinics.net/lists/iommu/msg43586.html
>
> Signed-off-by: Ajay Kumar <ajaykumar.rs@xxxxxxxxxxx>
> Signed-off-by: Sathyam Panda <sathya.panda@xxxxxxxxxxx>
> ---
>  drivers/iommu/iova.c | 17 ++++++++++++++++-
>  include/linux/iova.h |  1 +
>  2 files changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 41c605b0058f..0e99975036ae 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -44,6 +44,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>  	iovad->granule = granule;
>  	iovad->start_pfn = start_pfn;
>  	iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
> +	iovad->curr_limit_pfn = iovad->dma_32bit_pfn;
>  	iovad->max32_alloc_size = iovad->dma_32bit_pfn;
>  	iovad->flush_cb = NULL;
>  	iovad->fq = NULL;
> @@ -116,9 +117,20 @@ EXPORT_SYMBOL_GPL(init_iova_flush_queue);
>  static struct rb_node *
>  __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
>  {
> -	if (limit_pfn <= iovad->dma_32bit_pfn)
> +	if (limit_pfn <= iovad->dma_32bit_pfn) {
> +		/* re-init cached node if DMA limit has changed */
> +		if (limit_pfn != iovad->curr_limit_pfn) {
> +			iovad->cached32_node = &iovad->anchor.node;
> +			iovad->curr_limit_pfn = limit_pfn;
> +		}
>  		return iovad->cached32_node;
> +	}
> +	/* re-init cached node if DMA limit has changed */
> +	if (limit_pfn != iovad->curr_limit_pfn) {
> +		iovad->cached_node = &iovad->anchor.node;
> +		iovad->curr_limit_pfn = limit_pfn;
> +	}
>  	return iovad->cached_node;
>  }
> @@ -190,6 +202,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>  	if (size_aligned)
>  		align_mask <<= fls_long(size - 1);
>  
> +	if (limit_pfn != iovad->curr_limit_pfn)
> +		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
> +
>  	/* Walk the tree backwards */
>  	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
>  	if (limit_pfn <= iovad->dma_32bit_pfn &&
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index a0637abffee8..be2220c096ef 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -73,6 +73,7 @@ struct iova_domain {
>  	unsigned long	granule;	/* pfn granularity for this domain */
>  	unsigned long	start_pfn;	/* Lower limit for this domain */
>  	unsigned long	dma_32bit_pfn;
> +	unsigned long	curr_limit_pfn;	/* Current max limit for this domain */
>  	unsigned long	max32_alloc_size; /* Size of last failed allocation */
>  	struct iova_fq __percpu *fq;	/* Flush Queue */