This is a note to let you know that I've just added the patch titled

    swiotlb: relocate PageHighMem test away from rmem_swiotlb_setup

to the 6.1-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     swiotlb-relocate-pagehighmem-test-away-from-rmem_swi.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 0aa4aea233e8f3c26c20b0e062d5b28eb15025ba
Author: Doug Berger <opendmb@xxxxxxxxx>
Date:   Fri Apr 14 14:29:25 2023 -0700

    swiotlb: relocate PageHighMem test away from rmem_swiotlb_setup

    [ Upstream commit a90922fa25370902322e9de6640e58737d459a50 ]

    The reservedmem_of_init_fn's are invoked very early at boot before the
    memory zones have even been defined. This makes it inappropriate to
    test whether the page corresponding to a PFN is in ZONE_HIGHMEM from
    within one.

    Removing the check allows an ARM 32-bit kernel with SPARSEMEM enabled
    to boot properly since otherwise we would be de-referencing an
    uninitialized sparsemem map to perform pfn_to_page() check.

    The arm64 architecture happens to work (and also has no high memory),
    but other 32-bit architectures could also be having similar issues.

    While it would be nice to provide early feedback about a reserved DMA
    pool residing in highmem, it is not possible to do that until the first
    time we try to use it, which is where the check is moved to.

    Fixes: 0b84e4f8b793 ("swiotlb: Add restricted DMA pool initialization")
    Signed-off-by: Doug Berger <opendmb@xxxxxxxxx>
    Signed-off-by: Florian Fainelli <f.fainelli@xxxxxxxxx>
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 339a990554e7f..bf1ed08ec541f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -987,6 +987,11 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 	/* Set Per-device io tlb area to one */
 	unsigned int nareas = 1;
 
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		dev_err(dev, "Restricted DMA pool must be accessible within the linear mapping.");
+		return -EINVAL;
+	}
+
 	/*
 	 * Since multiple devices can share the same pool, the private data,
 	 * io_tlb_mem struct, will be initialized by the first device attached
@@ -1048,11 +1053,6 @@ static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
 	    of_get_flat_dt_prop(node, "no-map", NULL))
 		return -EINVAL;
 
-	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
-		pr_err("Restricted DMA pool must be accessible within the linear mapping.");
-		return -EINVAL;
-	}
-
 	rmem->ops = &rmem_swiotlb_ops;
 	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
 		&rmem->base, (unsigned long)rmem->size / SZ_1M);