This is a note to let you know that I've just added the patch titled

    s390/boot: cleanup number of page table levels setup

to the 6.5-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:

    s390-boot-cleanup-number-of-page-table-levels-setup.patch

and it can be found in the queue-6.5 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 1433000ff520b2e2411103591369e6afcbda459b
Author: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
Date:   Thu Jul 6 12:28:17 2023 +0200

    s390/boot: cleanup number of page table levels setup

    [ Upstream commit 8ddccc8a7d06f7ea4d8579970c95609d1b1de77b ]

    The separate vmalloc area size check against _REGION2_SIZE
    is needed in case user provided insanely large value using
    vmalloc= kernel command line parameter. That could lead to
    overflow and selecting 3 page table levels instead of 4.

    Use size_add() for the overflow check and get rid of the
    extra vmalloc area check.

    With the current values of CONFIG_MAX_PHYSMEM_BITS and
    PAGES_PER_SECTION the sum of maximal possible size of
    identity mapping and vmemmap area (derived from these
    macros) plus modules area size MODULES_LEN can not
    overflow. Thus, that sum is used as first addend while
    vmalloc area size is second addend for size_add().

    Suggested-by: Heiko Carstens <hca@xxxxxxxxxxxxx>
    Acked-by: Heiko Carstens <hca@xxxxxxxxxxxxx>
    Signed-off-by: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
    Signed-off-by: Heiko Carstens <hca@xxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
index 64bd7ac3e35d1..f8d0550e5d2af 100644
--- a/arch/s390/boot/startup.c
+++ b/arch/s390/boot/startup.c
@@ -176,6 +176,7 @@ static unsigned long setup_kernel_memory_layout(void)
 	unsigned long asce_limit;
 	unsigned long rte_size;
 	unsigned long pages;
+	unsigned long vsize;
 	unsigned long vmax;
 
 	pages = ident_map_size / PAGE_SIZE;
@@ -183,11 +184,9 @@ static unsigned long setup_kernel_memory_layout(void)
 	vmemmap_size = SECTION_ALIGN_UP(pages) * sizeof(struct page);
 
 	/* choose kernel address space layout: 4 or 3 levels. */
-	vmemmap_start = round_up(ident_map_size, _REGION3_SIZE);
-	if (IS_ENABLED(CONFIG_KASAN) ||
-	    vmalloc_size > _REGION2_SIZE ||
-	    vmemmap_start + vmemmap_size + vmalloc_size + MODULES_LEN >
-		    _REGION2_SIZE) {
+	vsize = round_up(ident_map_size, _REGION3_SIZE) + vmemmap_size + MODULES_LEN;
+	vsize = size_add(vsize, vmalloc_size);
+	if (IS_ENABLED(CONFIG_KASAN) || (vsize > _REGION2_SIZE)) {
 		asce_limit = _REGION1_SIZE;
 		rte_size = _REGION2_SIZE;
 	} else {
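
For reference, below is a minimal userspace sketch of the saturating addition
that the patch relies on, assuming size_add() behaves as documented in
include/linux/overflow.h (the sum saturates to SIZE_MAX instead of wrapping).
The helper name sketch_size_add() and all constants are illustrative stand-ins,
not the kernel implementation or the real s390 layout values:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Model of the kernel's size_add(): saturate to SIZE_MAX instead of wrapping. */
static size_t sketch_size_add(size_t addend1, size_t addend2)
{
	size_t sum;

	if (__builtin_add_overflow(addend1, addend2, &sum))
		return SIZE_MAX;
	return sum;
}

int main(void)
{
	/* Hypothetical sizes: fixed part = ident map + vmemmap + MODULES_LEN. */
	size_t fixed_part = (size_t)1 << 40;
	size_t vmalloc_size = SIZE_MAX;         /* absurd vmalloc= from the command line */
	size_t region2_size = (size_t)1 << 42;  /* stand-in for _REGION2_SIZE */

	/*
	 * A plain addition would wrap around and wrongly select the 3-level
	 * layout; the saturating sum keeps the comparison correct.
	 */
	size_t vsize = sketch_size_add(fixed_part, vmalloc_size);

	printf("use %d page table levels\n", vsize > region2_size ? 4 : 3);
	return 0;
}

Saturating at SIZE_MAX means an oversized vmalloc= value can only push the
layout toward more page table levels, never fewer, which is the behavior the
commit message describes.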