Patch "s390/kasan: fix insecure W+X mapping warning" has been added to the 6.4-stable tree

This is a note to let you know that I've just added the patch titled

    s390/kasan: fix insecure W+X mapping warning

to the 6.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     s390-kasan-fix-insecure-w-x-mapping-warning.patch
and it can be found in the queue-6.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 875a5fdc92e728209fdb0de537f0e2ae9eccf7fd
Author: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
Date:   Fri May 26 14:30:29 2023 +0200

    s390/kasan: fix insecure W+X mapping warning
    
    [ Upstream commit 2ed8b509753a0454b52b2d72e982265472c8d861 ]
    
    Since commit 3b5c3f000c2e ("s390/kasan: move shadow mapping
    to decompressor") the decompressor establishes mappings for
    the shadow memory and sets initial protection attributes to
    RWX. The decompressed kernel resets protection to RW+NX
    later on.
    
    If a shadow memory range is not aligned on a page boundary
    (e.g. as a result of using the mem= kernel command line parameter),
    the "Checked W+X mappings: FAILED, 1 W+X pages found" warning
    is triggered.
    
    Reported-by: Vasily Gorbik <gor@xxxxxxxxxxxxx>
    Fixes: 557b19709da9 ("s390/kasan: move shadow mapping to decompressor")
    Reviewed-by: Vasily Gorbik <gor@xxxxxxxxxxxxx>
    Signed-off-by: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index 5b22c6e24528a..b9dcb4ae6c59a 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -667,7 +667,15 @@ static void __init memblock_region_swap(void *a, void *b, int size)
 
 #ifdef CONFIG_KASAN
 #define __sha(x)	((unsigned long)kasan_mem_to_shadow((void *)x))
+
+static inline int set_memory_kasan(unsigned long start, unsigned long end)
+{
+	start = PAGE_ALIGN_DOWN(__sha(start));
+	end = PAGE_ALIGN(__sha(end));
+	return set_memory_rwnx(start, (end - start) >> PAGE_SHIFT);
+}
 #endif
+
 /*
  * map whole physical memory to virtual memory (identity mapping)
  * we reserve enough space in the vmalloc area for vmemmap to hotplug
@@ -737,10 +745,8 @@ void __init vmem_map_init(void)
 	}
 
 #ifdef CONFIG_KASAN
-	for_each_mem_range(i, &base, &end) {
-		set_memory_rwnx(__sha(base),
-				(__sha(end) - __sha(base)) >> PAGE_SHIFT);
-	}
+	for_each_mem_range(i, &base, &end)
+		set_memory_kasan(base, end);
 #endif
 	set_memory_rox((unsigned long)_stext,
 		       (unsigned long)(_etext - _stext) >> PAGE_SHIFT);


