Patch "swiotlb: Fix alignment checks when both allocation and DMA masks are present" has been added to the 6.6-stable tree

This is a note to let you know that I've just added the patch titled

    swiotlb: Fix alignment checks when both allocation and DMA masks are present

to the 6.6-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     swiotlb-fix-alignment-checks-when-both-allocation-an.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit b86ab7742b5a07494c16e9b5fd1f5a56d2365b4b
Author: Will Deacon <will@xxxxxxxxxx>
Date:   Fri Mar 8 15:28:27 2024 +0000

    swiotlb: Fix alignment checks when both allocation and DMA masks are present
    
    [ Upstream commit 51b30ecb73b481d5fac6ccf2ecb4a309c9ee3310 ]
    
    Nicolin reports that swiotlb buffer allocations fail for an NVME device
    behind an IOMMU using 64KiB pages. This is because we end up with a
    minimum allocation alignment of 64KiB (for the IOMMU to map the buffer
    safely) but a minimum DMA alignment mask corresponding to a 4KiB NVME
    page (i.e. preserving the 4KiB page offset from the original allocation).
    If the original address is not 4KiB-aligned, the allocation will fail
    because swiotlb_search_pool_area() erroneously compares these unmasked
    bits with the 64KiB-aligned candidate allocation.
    
    Tweak swiotlb_search_pool_area() so that the DMA alignment mask is
    reduced based on the required alignment of the allocation.
    
    Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
    Link: https://lore.kernel.org/r/cover.1707851466.git.nicolinc@xxxxxxxxxx
    Reported-by: Nicolin Chen <nicolinc@xxxxxxxxxx>
    Signed-off-by: Will Deacon <will@xxxxxxxxxx>
    Reviewed-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
    Tested-by: Nicolin Chen <nicolinc@xxxxxxxxxx>
    Tested-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ded5a1f9e8f82..675ae318f74f8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -981,8 +981,7 @@ static int swiotlb_area_find_slots(struct device *dev, struct io_tlb_pool *pool,
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
 	unsigned long max_slots = get_max_slots(boundary_mask);
-	unsigned int iotlb_align_mask =
-		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
+	unsigned int iotlb_align_mask = dma_get_min_align_mask(dev);
 	unsigned int nslots = nr_slots(alloc_size), stride;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int index, slots_checked, count = 0, i;
@@ -993,6 +992,14 @@ static int swiotlb_area_find_slots(struct device *dev, struct io_tlb_pool *pool,
 	BUG_ON(!nslots);
 	BUG_ON(area_index >= pool->nareas);
 
+	/*
+	 * Ensure that the allocation is at least slot-aligned and update
+	 * 'iotlb_align_mask' to ignore bits that will be preserved when
+	 * offsetting into the allocation.
+	 */
+	alloc_align_mask |= (IO_TLB_SIZE - 1);
+	iotlb_align_mask &= ~alloc_align_mask;
+
 	/*
 	 * For mappings with an alignment requirement don't bother looping to
 	 * unaligned slots once we found an aligned one.
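
For reference, here is a minimal userspace sketch of the mask arithmetic
described in the commit message above. The constants (a 2KiB swiotlb slot,
a 4KiB NVMe min-align mask, a 64KiB IOMMU allocation alignment), the
addresses and all names are assumptions made for illustration only, not
kernel API; the sketch only shows why a non-4KiB-aligned orig_addr could
never match a 64KiB-aligned candidate slot before this patch, and why it
can afterwards.

/*
 * Illustrative sketch only; constants and addresses are assumed values
 * matching the scenario in the commit message, not kernel code.
 */
#include <stdio.h>

#define SLOT_SIZE              2048u           /* assumed IO_TLB_SIZE (2KiB slot) */
#define NVME_MIN_ALIGN_MASK    (4096u - 1)     /* assumed 4KiB device page mask   */
#define IOMMU_ALLOC_ALIGN_MASK (65536u - 1)    /* assumed 64KiB IOMMU page mask   */

int main(void)
{
	unsigned int orig_addr = 0x123800;      /* not 4KiB-aligned: bit 11 is set */
	unsigned int candidate = 0x40000;       /* a 64KiB-aligned candidate slot  */

	/* Old behaviour: only slot-sized bits were stripped from the DMA mask. */
	unsigned int old_mask = NVME_MIN_ALIGN_MASK & ~(SLOT_SIZE - 1);
	printf("old iotlb_align_mask = %#x\n", old_mask);          /* 0x800    */
	printf("old check passes     = %d\n",
	       (candidate & old_mask) == (orig_addr & old_mask));  /* 0: fails */

	/*
	 * New behaviour: bits guaranteed by the allocation alignment are no
	 * longer compared, so the 64KiB-aligned candidate is acceptable.
	 */
	unsigned int alloc_align_mask = IOMMU_ALLOC_ALIGN_MASK | (SLOT_SIZE - 1);
	unsigned int new_mask = NVME_MIN_ALIGN_MASK & ~alloc_align_mask;
	printf("new iotlb_align_mask = %#x\n", new_mask);          /* 0x0       */
	printf("new check passes     = %d\n",
	       (candidate & new_mask) == (orig_addr & new_mask));  /* 1: passes */

	return 0;
}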



