Re: [PATCH v3 8/15] iommufd: Algorithms for PFN storage

On Tue, Oct 25, 2022 at 03:12:17PM -0300, Jason Gunthorpe wrote:

> +/**
> + * iopt_unmap_domain() - Unmap without unpinning PFNs in a domain
> + * @iopt: The iopt the domain is part of
> + * @domain: The domain to unmap
> + *
> + * The caller must know that unpinning is not required, usually because there
> + * are other domains in the iopt.
> + */
> +void iopt_unmap_domain(struct io_pagetable *iopt, struct iommu_domain *domain)
> +{
> +	struct interval_tree_span_iter span;
> +
> +	interval_tree_for_each_span(&span, &iopt->area_itree, 0, ULONG_MAX)
> +		if (!span.is_hole)
> +			iommu_unmap_nofail(domain, span.start_used,
> +					   span.last_used - span.start_used +
> +						   1);
> +}

Syzkaller found a splat here.

The area_itree maintains "tombstones": areas whose IOVA has been allocated
and reserved but not yet populated into the domain. This scheme reduces the
scope of the area_itree lock to just the allocation step, not the much
slower population step, and it is the basis for parallel mapping operations.
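
As a rough sketch of that two-phase pattern (a fragment for illustration
only - iopt_reserve_iova() is a made-up stand-in, and I'm assuming the
iova_rwsem that guards area_itree; only iopt_area_fill_domain() is the
real helper from io_pagetable.h):

	/*
	 * Phase 1: reserve the IOVA by inserting the area into area_itree
	 * under the tree lock. area->pages stays NULL - the tombstone.
	 * (iopt_reserve_iova() is a stand-in name, not the real function.)
	 */
	down_write(&iopt->iova_rwsem);
	rc = iopt_reserve_iova(iopt, area, iova, length);
	up_write(&iopt->iova_rwsem);
	if (rc)
		return rc;

	/*
	 * Phase 2: pin the pages and map them into the domain without
	 * holding the tree lock. Only after this does area->pages become
	 * non-NULL and the IOVA actually get mapped in the domain.
	 */
	rc = iopt_area_fill_domain(area, domain);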

However, when an area is tombstoned with area->pages == NULL it also has
not been mapped into any domain and thus cannot be unmapped.

The span iterator above actually iterates over all the allocated IOVA, not
the IOVA that has been populated into a domain. To get the latter we would
need to check area->pages before letting the area count as 'used'. Thus, if
we race domain destruction with map and hit a tombstone, we trigger a
WARN_ON in the selftest because the IOVA being unmapped is not mapped.
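
Put differently, a walk that is only supposed to touch mapped IOVA has to
go area by area and skip the tombstones, along these lines (a sketch,
assuming iterator helpers like iopt_area_iter_first()/iopt_area_iter_next()
over area_itree; the actual fix below drops the iopt-wide walk entirely):

	struct iopt_area *area;

	for (area = iopt_area_iter_first(iopt, 0, ULONG_MAX); area;
	     area = iopt_area_iter_next(area, 0, ULONG_MAX)) {
		/* Tombstone: IOVA reserved, but nothing mapped yet */
		if (!area->pages)
			continue;
		iommu_unmap_nofail(domain, iopt_area_iova(area),
				   iopt_area_length(area));
	}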

A case of incorrect optimization. The fix is simple enough: just remove the
span iterator (and audit that no other users of area_itree misuse the span
iterator).

diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
index 5d4d48c80d3ad8..d1ba31491babb0 100644
--- a/drivers/iommu/iommufd/io_pagetable.c
+++ b/drivers/iommu/iommufd/io_pagetable.c
@@ -709,15 +709,14 @@ static void iopt_unfill_domain(struct io_pagetable *iopt,
 				continue;
 
 			mutex_lock(&pages->mutex);
-			if (area->storage_domain != domain) {
-				mutex_unlock(&pages->mutex);
-				continue;
-			}
-			area->storage_domain = storage_domain;
+			if (IS_ENABLED(CONFIG_IOMMUFD_TEST))
+				WARN_ON(!area->storage_domain);
+			if (area->storage_domain == domain)
+				area->storage_domain = storage_domain;
 			mutex_unlock(&pages->mutex);
-		}
 
-		iopt_unmap_domain(iopt, domain);
+			iopt_area_unmap_domain(area, domain);
+		}
 		return;
 	}
 
diff --git a/drivers/iommu/iommufd/io_pagetable.h b/drivers/iommu/iommufd/io_pagetable.h
index 0bd645da97809a..3b85fa344f6be3 100644
--- a/drivers/iommu/iommufd/io_pagetable.h
+++ b/drivers/iommu/iommufd/io_pagetable.h
@@ -65,7 +65,8 @@ void iopt_area_unfill_domains(struct iopt_area *area, struct iopt_pages *pages);
 int iopt_area_fill_domain(struct iopt_area *area, struct iommu_domain *domain);
 void iopt_area_unfill_domain(struct iopt_area *area, struct iopt_pages *pages,
 			     struct iommu_domain *domain);
-void iopt_unmap_domain(struct io_pagetable *iopt, struct iommu_domain *domain);
+void iopt_area_unmap_domain(struct iopt_area *area,
+			    struct iommu_domain *domain);
 
 static inline unsigned long iopt_area_index(struct iopt_area *area)
 {
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index b15967d87ffa86..dcc4620cd5d5e6 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -1186,22 +1186,17 @@ static void iopt_area_unfill_partial_domain(struct iopt_area *area,
 }
 
 /**
- * iopt_unmap_domain() - Unmap without unpinning PFNs in a domain
- * @iopt: The iopt the domain is part of
+ * iopt_area_unmap_domain() - Unmap without unpinning PFNs in a domain
+ * @area: The IOVA range to unmap
  * @domain: The domain to unmap
  *
  * The caller must know that unpinning is not required, usually because there
  * are other domains in the iopt.
  */
-void iopt_unmap_domain(struct io_pagetable *iopt, struct iommu_domain *domain)
+void iopt_area_unmap_domain(struct iopt_area *area, struct iommu_domain *domain)
 {
-	struct interval_tree_span_iter span;
-
-	interval_tree_for_each_span(&span, &iopt->area_itree, 0, ULONG_MAX)
-		if (!span.is_hole)
-			iommu_unmap_nofail(domain, span.start_used,
-					   span.last_used - span.start_used +
-						   1);
+	iommu_unmap_nofail(domain, iopt_area_iova(area),
+			   iopt_area_length(area));
 }
 
 /**



