This is a note to let you know that I've just added the patch titled

    mm: kfence: fix handling discontiguous page

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-kfence-fix-handling-discontiguous-page.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 1f2803b2660f4b04d48d065072c0ae0c9ca255fd Mon Sep 17 00:00:00 2001
From: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Date: Thu, 23 Mar 2023 10:50:03 +0800
Subject: mm: kfence: fix handling discontiguous page

From: Muchun Song <songmuchun@xxxxxxxxxxxxx>

commit 1f2803b2660f4b04d48d065072c0ae0c9ca255fd upstream.

The struct pages could be discontiguous when the kfence pool is allocated
via alloc_contig_pages() with CONFIG_SPARSEMEM and
!CONFIG_SPARSEMEM_VMEMMAP.

This may result in setting PG_slab and memcg_data to an arbitrary address
(which may not be used as a struct page), which in the worst case might
corrupt the kernel.

So the iteration should use nth_page().

Link: https://lkml.kernel.org/r/20230323025003.94447-1-songmuchun@xxxxxxxxxxxxx
Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Reviewed-by: Marco Elver <elver@xxxxxxxxxx>
Reviewed-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: SeongJae Park <sjpark@xxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 mm/kfence/core.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -557,7 +557,7 @@ static unsigned long kfence_init_pool(vo
 	 * enters __slab_free() slow-path.
 	 */
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(&pages[i]);
+		struct slab *slab = page_slab(nth_page(pages, i));
 
 		if (!i || (i % 2))
 			continue;
@@ -603,7 +603,7 @@ static unsigned long kfence_init_pool(vo
 
 reset_slab:
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(&pages[i]);
+		struct slab *slab = page_slab(nth_page(pages, i));
 
 		if (!i || (i % 2))
 			continue;


Patches currently in stable-queue which might be from songmuchun@xxxxxxxxxxxxx are

queue-6.1/mm-kfence-fix-handling-discontiguous-page.patch
queue-6.1/mm-kfence-fix-pg_slab-and-memcg_data-clearing.patch
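
For readers unfamiliar with why plain pointer arithmetic breaks here: with
CONFIG_SPARSEMEM and !CONFIG_SPARSEMEM_VMEMMAP the struct page array is split
into per-section chunks that are not guaranteed to be virtually contiguous,
which is why nth_page() goes through the pfn instead of doing pages + i. The
small userspace program below is purely an illustrative sketch, not kernel
code; all names (fake_page, fake_pfn_to_page, fake_nth_page, PAGES_PER_SECTION
value) are made up for the analogy.

#include <stdio.h>

struct fake_page { unsigned long pfn; };

#define PAGES_PER_SECTION 4UL

/* Two "sections": separate arrays, so not adjacent in memory, mimicking
 * non-contiguous per-section struct page chunks under SPARSEMEM. */
static struct fake_page section0[PAGES_PER_SECTION];
static struct fake_page section1[PAGES_PER_SECTION];

/* Analogue of pfn_to_page(): pick the chunk that describes this pfn. */
static struct fake_page *fake_pfn_to_page(unsigned long pfn)
{
	struct fake_page *sec = (pfn / PAGES_PER_SECTION) ? section1 : section0;
	return &sec[pfn % PAGES_PER_SECTION];
}

/* Analogue of nth_page(): advance by pfn, not by pointer arithmetic. */
static struct fake_page *fake_nth_page(struct fake_page *page, unsigned long n)
{
	return fake_pfn_to_page(page->pfn + n);
}

int main(void)
{
	unsigned long pfn, n = PAGES_PER_SECTION; /* first pfn of section1 */

	for (pfn = 0; pfn < 2 * PAGES_PER_SECTION; pfn++)
		fake_pfn_to_page(pfn)->pfn = pfn;

	struct fake_page *base = fake_pfn_to_page(0);

	/* Crossing the section boundary: pointer arithmetic points one past
	 * the end of section0, where no valid page description lives, while
	 * the pfn-based lookup finds the correct entry in section1. */
	printf("base + n        = %p (one past section0, no valid page there)\n",
	       (void *)(base + n));
	printf("fake_nth_page() = %p (section1[0], pfn %lu)\n",
	       (void *)fake_nth_page(base, n), fake_nth_page(base, n)->pfn);
	return 0;
}

In the real kernel, nth_page(page, n) expands to pfn_to_page(page_to_pfn(page) + n)
under CONFIG_SPARSEMEM && !CONFIG_SPARSEMEM_VMEMMAP, and to plain (page) + (n)
otherwise, which is why the old &pages[i] form only misbehaves in the former
configuration.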