Patch "powerpc/kasan: Fix addr error caused by page alignment" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    powerpc/kasan: Fix addr error caused by page alignment

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-kasan-fix-addr-error-caused-by-page-alignmen.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 5c81a67982b51acc0fa5b7a4f799229a761019c8
Author: Jiangfeng Xiao <xiaojiangfeng@xxxxxxxxxx>
Date:   Tue Jan 23 09:45:59 2024 +0800

    powerpc/kasan: Fix addr error caused by page alignment
    
    [ Upstream commit 4a7aee96200ad281a5cc4cf5c7a2e2a49d2b97b0 ]
    
    In kasan_init_region(), when k_start is not page aligned, then at the
    beginning of the for loop k_cur = k_start & PAGE_MASK is less than
    k_start, and so `va = block + k_cur - k_start` is less than block. That
    address va is invalid, because the memory from va up to block was not
    allocated by memblock_alloc and is therefore not reserved by
    memblock_reserve later; it can be handed out to other users.
    
    As a result, memory overwriting occurs.
    
    For example:
    int __init __weak kasan_init_region(void *start, size_t size)
    {
    [...]
            /* if say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
            block = memblock_alloc(k_end - k_start, PAGE_SIZE);
            [...]
            for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
                    /* at the begin of for loop
                     * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
                     * va(dcd96c00) is less than block(dcd97000), va is invalid
                     */
                    void *va = block + k_cur - k_start;
                    [...]
            }
    [...]
    }
    
    Therefore, page alignment is performed on k_start before
    memblock_alloc() to ensure the validity of the VA address.
    
    Fixes: 663c0c9496a6 ("powerpc/kasan: Fix shadow area set up for modules.")
    Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@xxxxxxxxxx>
    Signed-off-by: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
    Link: https://msgid.link/1705974359-43790-1-git-send-email-xiaojiangfeng@xxxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
index a70828a6d935..aa9aa11927b2 100644
--- a/arch/powerpc/mm/kasan/init_32.c
+++ b/arch/powerpc/mm/kasan/init_32.c
@@ -64,6 +64,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
 	if (ret)
 		return ret;
 
+	k_start = k_start & PAGE_MASK;
 	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
 	if (!block)
 		return -ENOMEM;
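
For reference, below is a small stand-alone sketch (not part of the patch)
that simply replays the offset arithmetic from the commit message with the
example addresses quoted above (block=dcd97000, k_start=feef7400). The
4 KiB PAGE_SIZE and the printf scaffolding are assumptions made for
illustration only, not taken from the kernel sources.

/* Illustrates why an unaligned k_start pushes va below block. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000UL              /* assumed 4 KiB pages */
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
        uintptr_t block   = 0xdcd97000UL;        /* example from commit message */
        uintptr_t k_start = 0xfeef7400UL;        /* not page aligned */
        uintptr_t k_cur   = k_start & PAGE_MASK; /* 0xfeef7000 */

        /* Before the fix: the offset is taken against the unaligned k_start,
         * so the very first va is 0x400 bytes below the allocated block. */
        uintptr_t va_bad  = block + k_cur - k_start;           /* 0xdcd96c00 */

        /* After the fix: k_start is rounded down to a page boundary before
         * memblock_alloc(), so the first va coincides with block itself. */
        uintptr_t k_start_fixed = k_start & PAGE_MASK;
        uintptr_t va_good = block + k_cur - k_start_fixed;     /* 0xdcd97000 */

        printf("unaligned k_start: va = %#lx (block = %#lx)\n",
               (unsigned long)va_bad, (unsigned long)block);
        printf("aligned   k_start: va = %#lx\n", (unsigned long)va_good);
        return 0;
}

With the unaligned k_start, the first iteration writes shadow data 0x400
bytes below the memblock allocation, i.e. into memory that was never
allocated or reserved; after rounding k_start down, every va computed in
the loop stays inside the allocated block.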



