[PATCH 6.6 238/267] swiotlb: Enforce page alignment in swiotlb_alloc()

6.6-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Will Deacon <will@xxxxxxxxxx>

commit 823353b7cf0ea9dfb09f5181d5fb2825d727200b upstream.

When allocating pages from a restricted DMA pool in swiotlb_alloc(),
the buffer address is blindly converted to a 'struct page *' that is
returned to the caller. In the unlikely event of an allocation bug,
page-unaligned addresses are not detected and slots can silently be
double-allocated.

Add a simple check of the buffer alignment in swiotlb_alloc() to make
debugging a little easier if something has gone wonky.

Cc: stable@xxxxxxxxxxxxxxx # v6.6+
Signed-off-by: Will Deacon <will@xxxxxxxxxx>
Reviewed-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
Reviewed-by: Petr Tesarik <petr.tesarik1@xxxxxxxxxxxxxxxxxxx>
Tested-by: Nicolin Chen <nicolinc@xxxxxxxxxx>
Tested-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
Signed-off-by: Christoph Hellwig <hch@xxxxxx>
Signed-off-by: Fabio Estevam <festevam@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 kernel/dma/swiotlb.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1627,6 +1627,12 @@ struct page *swiotlb_alloc(struct device
 		return NULL;
 
 	tlb_addr = slot_addr(pool->start, index);
+	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
+		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
+			      &tlb_addr);
+		swiotlb_release_slots(dev, tlb_addr);
+		return NULL;
+	}
 
 	return pfn_to_page(PFN_DOWN(tlb_addr));
 }
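
For readers following along, the PAGE_ALIGNED() test added above reduces to a
simple mask check: an address is page-aligned when its low PAGE_SHIFT bits are
all zero. Below is a minimal, stand-alone C sketch of that idiom, assuming a
4 KiB page size purely for illustration; it is not kernel code and the example
addresses and helper name are made up.

	#include <stdint.h>
	#include <stdio.h>

	/* Assumed 4 KiB page size, for illustration only. */
	#define EXAMPLE_PAGE_SIZE 4096UL

	/* Mirrors the idea behind PAGE_ALIGNED(): true when the low bits are zero. */
	static int example_page_aligned(uintptr_t addr)
	{
		return (addr & (EXAMPLE_PAGE_SIZE - 1)) == 0;
	}

	int main(void)
	{
		uintptr_t ok  = 0x10000;	/* multiple of 4096: would pass the check   */
		uintptr_t bad = 0x10800;	/* mid-page address: would trigger the WARN */

		printf("0x%lx aligned: %d\n", (unsigned long)ok, example_page_aligned(ok));
		printf("0x%lx aligned: %d\n", (unsigned long)bad, example_page_aligned(bad));
		return 0;
	}

In the patch itself the failing case additionally releases the slots with
swiotlb_release_slots() before returning NULL, so a misaligned allocation is
warned about once and backed out rather than being handed to pfn_to_page().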





