Patch "vfio/type1: Respect IOMMU reserved regions in vfio_test_domain_fgsp()" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    vfio/type1: Respect IOMMU reserved regions in vfio_test_domain_fgsp()

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     vfio-type1-respect-iommu-reserved-regions-in-vfio_te.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit f0fa37772ed7be147154796fd36d4ba1bcf23ff4
Author: Niklas Schnelle <schnelle@xxxxxxxxxxxxx>
Date:   Tue Jan 10 17:44:27 2023 +0100

    vfio/type1: Respect IOMMU reserved regions in vfio_test_domain_fgsp()
    
    [ Upstream commit 895c0747f726bb50c9b7a805613a61d1b6f9fa06 ]
    
    Since commit cbf7827bc5dc ("iommu/s390: Fix potential s390_domain
    aperture shrinking") the s390 IOMMU driver uses reserved regions for the
    system provided DMA ranges of PCI devices. Previously it reduced the
    size of the IOMMU aperture and checked it on each mapping operation.
    On current machines the system denies use of DMA addresses below 2^32 for
    all PCI devices.
    
    Usually mapping IOVAs in a reserved region is harmless until a DMA
    actually tries to utilize the mapping. However, on s390 there is
    a virtual PCI device called ISM which is implemented in firmware and
    used for cross-LPAR communication. Unlike real PCI devices, this device
    does not use the hardware IOMMU but inspects IOMMU translation tables
    directly on IOTLB flush (s390 RPCIT instruction). If it detects IOVA
    mappings outside the allowed ranges it goes into an error state. This
    error state then causes the device to be unavailable to the KVM guest.
    
    Analysing this we found that vfio_test_domain_fgsp() maps 2 pages at DMA
    address 0 irrespective of the IOMMU's reserved regions. Even if usually
    harmless, this seems wrong in the general case, so instead go through the
    freshly updated IOVA list and try to find a range that isn't reserved,
    fits 2 pages, and is PAGE_SIZE * 2 aligned. If found, use that range for
    testing for fine-grained super pages.
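    The range selection described above can be sketched in userspace C. This
    is an illustrative model, not the kernel code: the range list, the
    exclusive-end convention, and the pick_test_iova() helper are assumptions
    made here for demonstration; the actual patch walks the vfio_iova list
    with kernel list primitives (see the diff below).

    ```c
    /* Sketch: pick the first IOVA range that can hold two pages at a
     * PAGE_SIZE * 2 aligned start address, mirroring the selection logic
     * in the patched vfio_test_domain_fgsp(). Hypothetical helper; 'end'
     * is treated as exclusive here for simplicity. */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096UL
    #define ALIGN(x, a) (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

    struct range { uint64_t start, end; };

    static uint64_t pick_test_iova(const struct range *r, int n)
    {
        for (int i = 0; i < n; i++) {
            uint64_t start = ALIGN(r[i].start, PAGE_SIZE * 2);

            /* Skip ranges where alignment pushes past the end or
             * fewer than two pages remain. */
            if (start >= r[i].end || r[i].end - start < PAGE_SIZE * 2)
                continue;
            return start;
        }
        return UINT64_MAX; /* no usable range found */
    }

    int main(void)
    {
        /* Hypothetical IOVA list: the first range is a single page,
         * the second sits above 4 GiB (as on s390 with its reserved
         * low range) and is usable. */
        struct range regions[] = {
            { 0x1000ULL,      0x2000ULL },
            { 0x100000000ULL, 0x200000000ULL },
        };

        printf("0x%llx\n",
               (unsigned long long)pick_test_iova(regions, 2));
        return 0;
    }
    ```

    With the list above the first range is rejected (aligning 0x1000 up to
    0x2000 leaves no room), so the helper returns 0x100000000, the start of
    the second range.
    
    
    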
    
    Fixes: af029169b8fd ("vfio/type1: Check reserved region conflict and update iova list")
    Signed-off-by: Niklas Schnelle <schnelle@xxxxxxxxxxxxx>
    Reviewed-by: Matthew Rosato <mjrosato@xxxxxxxxxxxxx>
    Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20230110164427.4051938-2-schnelle@xxxxxxxxxxxxx
    Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 23c24fe98c00..2209372f236d 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1856,24 +1856,33 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
  * significantly boosts non-hugetlbfs mappings and doesn't seem to hurt when
  * hugetlbfs is in use.
  */
-static void vfio_test_domain_fgsp(struct vfio_domain *domain)
+static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *regions)
 {
-	struct page *pages;
 	int ret, order = get_order(PAGE_SIZE * 2);
+	struct vfio_iova *region;
+	struct page *pages;
+	dma_addr_t start;
 
 	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
 	if (!pages)
 		return;
 
-	ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
-			IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
-	if (!ret) {
-		size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
+	list_for_each_entry(region, regions, list) {
+		start = ALIGN(region->start, PAGE_SIZE * 2);
+		if (start >= region->end || (region->end - start < PAGE_SIZE * 2))
+			continue;
 
-		if (unmapped == PAGE_SIZE)
-			iommu_unmap(domain->domain, PAGE_SIZE, PAGE_SIZE);
-		else
-			domain->fgsp = true;
+		ret = iommu_map(domain->domain, start, page_to_phys(pages), PAGE_SIZE * 2,
+				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
+		if (!ret) {
+			size_t unmapped = iommu_unmap(domain->domain, start, PAGE_SIZE);
+
+			if (unmapped == PAGE_SIZE)
+				iommu_unmap(domain->domain, start + PAGE_SIZE, PAGE_SIZE);
+			else
+				domain->fgsp = true;
+		}
+		break;
 	}
 
 	__free_pages(pages, order);
@@ -2326,7 +2335,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		}
 	}
 
-	vfio_test_domain_fgsp(domain);
+	vfio_test_domain_fgsp(domain, &iova_copy);
 
 	/* replay mappings on new domains */
 	ret = vfio_iommu_replay(iommu, domain);
