Patch "riscv: rewrite __kernel_map_pages() to fix sleeping in invalid context" has been added to the 5.10-stable tree

This is a note to let you know that I've just added the patch titled

    riscv: rewrite __kernel_map_pages() to fix sleeping in invalid context

to the 5.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     riscv-rewrite-__kernel_map_pages-to-fix-sleeping-in-.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit f4a9ab6c094bec464d2264ab51a88ce2fe172dd7
Author: Nam Cao <namcao@xxxxxxxxxxxxx>
Date:   Wed May 15 07:50:40 2024 +0200

    riscv: rewrite __kernel_map_pages() to fix sleeping in invalid context
    
    [ Upstream commit fb1cf0878328fe75d47f0aed0a65b30126fcefc4 ]
    
    __kernel_map_pages() is a debug function which clears the valid bit in the
    page table entries of deallocated pages, in order to detect illegal memory
    accesses to freed pages.
    
    This function sets/clears the valid bit using __set_memory(). __set_memory()
    acquires init_mm's semaphore, and this operation may sleep. This is
    problematic because __kernel_map_pages() can be called in atomic context,
    where sleeping is illegal. An example of the warning this causes:
    
    BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:1578
    in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2, name: kthreadd
    preempt_count: 2, expected: 0
    CPU: 0 PID: 2 Comm: kthreadd Not tainted 6.9.0-g1d4c6d784ef6 #37
    Hardware name: riscv-virtio,qemu (DT)
    Call Trace:
    [<ffffffff800060dc>] dump_backtrace+0x1c/0x24
    [<ffffffff8091ef6e>] show_stack+0x2c/0x38
    [<ffffffff8092baf8>] dump_stack_lvl+0x5a/0x72
    [<ffffffff8092bb24>] dump_stack+0x14/0x1c
    [<ffffffff8003b7ac>] __might_resched+0x104/0x10e
    [<ffffffff8003b7f4>] __might_sleep+0x3e/0x62
    [<ffffffff8093276a>] down_write+0x20/0x72
    [<ffffffff8000cf00>] __set_memory+0x82/0x2fa
    [<ffffffff8000d324>] __kernel_map_pages+0x5a/0xd4
    [<ffffffff80196cca>] __alloc_pages_bulk+0x3b2/0x43a
    [<ffffffff8018ee82>] __vmalloc_node_range+0x196/0x6ba
    [<ffffffff80011904>] copy_process+0x72c/0x17ec
    [<ffffffff80012ab4>] kernel_clone+0x60/0x2fe
    [<ffffffff80012f62>] kernel_thread+0x82/0xa0
    [<ffffffff8003552c>] kthreadd+0x14a/0x1be
    [<ffffffff809357de>] ret_from_fork+0xe/0x1c
    
    Rewrite this function with apply_to_existing_page_range(). It is fine not
    to take any lock, because __kernel_map_pages() works on pages that are
    being allocated or deallocated, and those pages are not changed by anyone
    else in the meantime.
    
    Fixes: 5fde3db5eb02 ("riscv: add ARCH_SUPPORTS_DEBUG_PAGEALLOC support")
    Signed-off-by: Nam Cao <namcao@xxxxxxxxxxxxx>
    Cc: stable@xxxxxxxxxxxxxxx
    Reviewed-by: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
    Link: https://lore.kernel.org/r/1289ecba9606a19917bc12b6c27da8aa23e1e5ae.1715750938.git.namcao@xxxxxxxxxxxxx
    Signed-off-by: Palmer Dabbelt <palmer@xxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index ae63a0227e2fb..ce4b73b4d0e42 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -185,16 +185,32 @@ int set_direct_map_default_noflush(struct page *page)
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
+static int debug_pagealloc_set_page(pte_t *pte, unsigned long addr, void *data)
+{
+	int enable = *(int *)data;
+
+	unsigned long val = pte_val(ptep_get(pte));
+
+	if (enable)
+		val |= _PAGE_PRESENT;
+	else
+		val &= ~_PAGE_PRESENT;
+
+	set_pte(pte, __pte(val));
+
+	return 0;
+}
+
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (!debug_pagealloc_enabled())
 		return;
 
-	if (enable)
-		__set_memory((unsigned long)page_address(page), numpages,
-			     __pgprot(_PAGE_PRESENT), __pgprot(0));
-	else
-		__set_memory((unsigned long)page_address(page), numpages,
-			     __pgprot(0), __pgprot(_PAGE_PRESENT));
+	unsigned long start = (unsigned long)page_address(page);
+	unsigned long size = PAGE_SIZE * numpages;
+
+	apply_to_existing_page_range(&init_mm, start, size, debug_pagealloc_set_page, &enable);
+
+	flush_tlb_kernel_range(start, start + size);
 }
 #endif
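
As background on the splat quoted above: the bug class being reported is a
sleeping primitive (here, down_write() on init_mm's semaphore inside
__set_memory()) reached from atomic context. A minimal kernel-style
illustration of that pattern, using a hypothetical semaphore and function name
rather than the actual call path, might look like this:

/*
 * Hypothetical illustration of the bug class only; the real path is
 * __alloc_pages_bulk() -> __kernel_map_pages() -> __set_memory() ->
 * down_write(), as shown in the trace above.
 */
#include <linux/preempt.h>
#include <linux/rwsem.h>

static DECLARE_RWSEM(example_sem);	/* stand-in for init_mm's semaphore */

static void sleeping_while_atomic(void)
{
	preempt_disable();		/* enter atomic context */
	/*
	 * down_write() may sleep; with CONFIG_DEBUG_ATOMIC_SLEEP enabled this
	 * triggers the "sleeping function called from invalid context" BUG.
	 */
	down_write(&example_sem);
	up_write(&example_sem);
	preempt_enable();
}

The patch above sidesteps the semaphore entirely instead of trying to make
every caller of __kernel_map_pages() non-atomic.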




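The replacement relies on apply_to_existing_page_range(), which walks only the
already-populated page-table entries of a kernel virtual range and invokes a
callback for each PTE. A minimal sketch of that callback pattern, modelled on
the hunk above (the callback and wrapper names here are hypothetical;
apply_to_existing_page_range(), ptep_get(), set_pte() and
flush_tlb_kernel_range() are existing kernel APIs), is:

#include <linux/mm.h>
#include <linux/pgtable.h>
#include <asm/tlbflush.h>

/* Invoked once for every existing PTE in [start, start + size). */
static int clear_present_cb(pte_t *pte, unsigned long addr, void *data)
{
	unsigned long val = pte_val(ptep_get(pte));

	set_pte(pte, __pte(val & ~_PAGE_PRESENT));	/* riscv "valid" bit */
	return 0;		/* a non-zero return aborts the walk */
}

static void example_unmap_kernel_range(unsigned long start, unsigned long size)
{
	/* No lock is taken: the caller owns these pages, as argued above. */
	apply_to_existing_page_range(&init_mm, start, size,
				     clear_present_cb, NULL);
	flush_tlb_kernel_range(start, start + size);
}

Flushing the TLB after clearing the present bit matters: stale translations
could otherwise still satisfy accesses to the freed pages, defeating
DEBUG_PAGEALLOC's purpose of making such accesses fault.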