The patch titled
     Subject: slub: fix kmalloc_pagealloc_invalid_free unit test
has been added to the -mm tree.  Its filename is
     slub-fix-kmalloc_pagealloc_invalid_free-unit-test.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/slub-fix-kmalloc_pagealloc_invalid_free-unit-test.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/slub-fix-kmalloc_pagealloc_invalid_free-unit-test.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Shakeel Butt <shakeelb@xxxxxxxxxx>
Subject: slub: fix kmalloc_pagealloc_invalid_free unit test

The unit test kmalloc_pagealloc_invalid_free makes sure that, for a
higher-order slub allocation which goes to the page allocator, free is
called with the correct address, i.e. the virtual address of the head
page.

Commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free")
unified the free code paths for page-allocator-based slub allocations,
but instead of using the address passed by the caller, it extracted the
address from the page, making the unit test
kmalloc_pagealloc_invalid_free moot.  Fix this by using the address
passed by the caller.

Should we fix this?  I think yes, because developers expect KASAN to
catch these types of programming bugs.

Link: https://lkml.kernel.org/r/20210802180819.1110165-1-shakeelb@xxxxxxxxxx
Fixes: f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free")
Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Reported-by: Nathan Chancellor <nathan@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/mm/slub.c~slub-fix-kmalloc_pagealloc_invalid_free-unit-test
+++ a/mm/slub.c
@@ -3236,12 +3236,12 @@ struct detached_freelist {
 	struct kmem_cache	*s;
 };

-static inline void free_nonslab_page(struct page *page)
+static inline void free_nonslab_page(struct page *page, void *object)
 {
 	unsigned int order = compound_order(page);

 	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	kfree_hook(page_address(page));
+	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);
 }
@@ -3282,7 +3282,7 @@ int build_detached_freelist(struct kmem_
 	if (!s) {
 		/* Handle kalloc'ed objects */
 		if (unlikely(!PageSlab(page))) {
-			free_nonslab_page(page);
+			free_nonslab_page(page, object);
 			p[size] = NULL; /* mark object processed */
 			return size;
 		}
@@ -4258,7 +4258,7 @@ void kfree(const void *x)

 	page = virt_to_head_page(x);
 	if (unlikely(!PageSlab(page))) {
-		free_nonslab_page(page);
+		free_nonslab_page(page, object);
 		return;
 	}
 	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
_
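For reference, the test this patch un-breaks frees a page-allocator-backed
kmalloc object through a pointer offset from the address kmalloc() returned,
and expects KASAN to report an invalid free.  A minimal sketch of its logic,
paraphrased from lib/test_kasan.c (the exact guards and assertions in any
given tree may differ):

static void kmalloc_pagealloc_invalid_free(struct kunit *test)
{
	char *ptr;
	/* Larger than any kmalloc cache, so SLUB falls back to the
	 * page allocator.
	 */
	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	/* ptr + 1 is not the address the allocation returned, so the
	 * kfree() hooks must see the bogus pointer and KASAN must
	 * report an invalid free.  Before the fix above, kfree_hook()
	 * was handed page_address(page) instead, which is always
	 * valid, so the test could no longer fail.
	 */
	KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1));
}

The test is part of the KASAN KUnit suite (CONFIG_KASAN_KUNIT_TEST).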
Patches currently in -mm which might be from shakeelb@xxxxxxxxxx are

slub-fix-kmalloc_pagealloc_invalid_free-unit-test.patch
writeback-memcg-simplify-cgroup_writeback_by_id.patch
memcg-switch-lruvec-stats-to-rstat.patch
memcg-infrastructure-to-flush-memcg-stats.patch
memcg-infrastructure-to-flush-memcg-stats-v5.patch
memcg-cleanup-racy-sum-avoidance-code.patch