Patch "swiotlb: always set the number of areas before allocating the pool" has been added to the 6.4-stable tree

This is a note to let you know that I've just added the patch titled

    swiotlb: always set the number of areas before allocating the pool

to the 6.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     swiotlb-always-set-the-number-of-areas-before-alloca.patch
and it can be found in the queue-6.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 3e0aead0125493b7ae55fcf63b7b58f4cd13b0aa
Author: Petr Tesarik <petr.tesarik.ext@xxxxxxxxxx>
Date:   Mon Jun 26 15:01:03 2023 +0200

    swiotlb: always set the number of areas before allocating the pool
    
    [ Upstream commit aabd12609f91155f26584508b01f548215cc3c0c ]
    
    The number of areas defaults to the number of possible CPUs. However, the
    total number of slots may have to be increased after adjusting the number
    of areas. Consequently, the number of areas must be determined before
    allocating the memory pool. This is even explained with a comment in
    swiotlb_init_remap(), but swiotlb_init_late() adjusts the number of areas
    after slots are already allocated. The areas may end up being smaller than
    IO_TLB_SEGSIZE, which breaks per-area locking.
    
    While fixing swiotlb_init_late(), move all relevant comments before the
    definition of swiotlb_adjust_nareas() and convert them to kernel-doc.
    
    Fixes: 20347fca71a3 ("swiotlb: split up the global swiotlb lock")
    Signed-off-by: Petr Tesarik <petr.tesarik.ext@xxxxxxxxxx>
    Reviewed-by: Roberto Sassu <roberto.sassu@xxxxxxxxxx>
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index af2e304c672c4..16f53d8c51bcf 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -115,9 +115,16 @@ static bool round_up_default_nslabs(void)
 	return true;
 }
 
+/**
+ * swiotlb_adjust_nareas() - adjust the number of areas and slots
+ * @nareas:	Desired number of areas. Zero is treated as 1.
+ *
+ * Adjust the default number of areas in a memory pool.
+ * The default size of the memory pool may also change to meet minimum area
+ * size requirements.
+ */
 static void swiotlb_adjust_nareas(unsigned int nareas)
 {
-	/* use a single area when non is specified */
 	if (!nareas)
 		nareas = 1;
 	else if (!is_power_of_2(nareas))
@@ -298,10 +305,6 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 	if (swiotlb_force_disable)
 		return;
 
-	/*
-	 * default_nslabs maybe changed when adjust area number.
-	 * So allocate bounce buffer after adjusting area number.
-	 */
 	if (!default_nareas)
 		swiotlb_adjust_nareas(num_possible_cpus());
 
@@ -363,6 +366,9 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 	if (swiotlb_force_disable)
 		return 0;
 
+	if (!default_nareas)
+		swiotlb_adjust_nareas(num_possible_cpus());
+
 retry:
 	order = get_order(nslabs << IO_TLB_SHIFT);
 	nslabs = SLABS_PER_PAGE << order;
@@ -397,9 +403,6 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 			(PAGE_SIZE << order) >> 20);
 	}
 
-	if (!default_nareas)
-		swiotlb_adjust_nareas(num_possible_cpus());
-
 	area_order = get_order(array_size(sizeof(*mem->areas),
 		default_nareas));
 	mem->areas = (struct io_tlb_area *)
