The patch titled
     Subject: mm: kfence: fix handling discontiguous page
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-kfence-fix-handling-discontiguous-page.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-kfence-fix-handling-discontiguous-page.patch

This patch will later appear in the mm-hotfixes-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Subject: mm: kfence: fix handling discontiguous page
Date: Thu, 23 Mar 2023 10:50:03 +0800

The struct pages could be discontiguous when the kfence pool is allocated
via alloc_contig_pages() with CONFIG_SPARSEMEM and
!CONFIG_SPARSEMEM_VMEMMAP.  So, the iteration should use nth_page().

Link: https://lkml.kernel.org/r/20230323025003.94447-1-songmuchun@xxxxxxxxxxxxx
Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Reviewed-by: Marco Elver <elver@xxxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: SeongJae Park <sjpark@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/kfence/core.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/kfence/core.c~mm-kfence-fix-handling-discontiguous-page
+++ a/mm/kfence/core.c
@@ -556,7 +556,7 @@ static unsigned long kfence_init_pool(vo
 	 * enters __slab_free() slow-path.
 	 */
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(&pages[i]);
+		struct slab *slab = page_slab(nth_page(pages, i));
 
 		if (!i || (i % 2))
 			continue;
@@ -602,7 +602,7 @@ static unsigned long kfence_init_pool(vo
 
 reset_slab:
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(&pages[i]);
+		struct slab *slab = page_slab(nth_page(pages, i));
 
 		if (!i || (i % 2))
 			continue;
_

Patches currently in -mm which might be from songmuchun@xxxxxxxxxxxxx are

mm-kfence-fix-pg_slab-and-memcg_data-clearing.patch
mm-kfence-fix-handling-discontiguous-page.patch
mm-hugetlb_vmemmap-simplify-hugetlb_vmemmap_init-a-bit.patch
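
For context on why the fix works: with CONFIG_SPARSEMEM and
!CONFIG_SPARSEMEM_VMEMMAP, the memmap is allocated per memory section, so
the struct pages backing a physically contiguous range are not guaranteed
to be virtually contiguous, and plain pointer arithmetic like &pages[i]
can walk off the end of one section's memmap.  nth_page() avoids this by
translating through the pfn.  A minimal sketch of its definition,
paraphrased from include/linux/mm.h of this era (illustrative, not
verbatim):

	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	/* memmap may be split across sections: go via the pfn */
	#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
	#else
	/* memmap is virtually contiguous: pointer arithmetic is safe */
	#define nth_page(page, n)	((page) + (n))
	#endif

This is why the loops in the diff above switch from &pages[i] to
nth_page(pages, i): the former is only correct when the whole pool's
memmap is a single contiguous array, which alloc_contig_pages() does not
guarantee under this config combination.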