The patch titled
     Subject: slub: fix kmalloc_pagealloc_invalid_free unit test
has been removed from the -mm tree.  Its filename was
     slub-fix-kmalloc_pagealloc_invalid_free-unit-test.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Shakeel Butt <shakeelb@xxxxxxxxxx>
Subject: slub: fix kmalloc_pagealloc_invalid_free unit test

The unit test kmalloc_pagealloc_invalid_free makes sure that for a
higher-order slub allocation, which goes to the page allocator, the free
is called with the correct address, i.e. the virtual address of the head
page.

Commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free")
unified the free code paths for page-allocator-based slub allocations,
but instead of using the address passed by the caller, it extracted the
address from the page, making the unit test
kmalloc_pagealloc_invalid_free moot.  Fix this by using the address
passed by the caller.

Should we fix this?  I think yes, because developers expect KASAN to
catch these types of programming bugs.

Link: https://lkml.kernel.org/r/20210802180819.1110165-1-shakeelb@xxxxxxxxxx
Fixes: f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free")
Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Reported-by: Nathan Chancellor <nathan@xxxxxxxxxx>
Tested-by: Nathan Chancellor <nathan@xxxxxxxxxx>
Acked-by: Roman Gushchin <guro@xxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/mm/slub.c~slub-fix-kmalloc_pagealloc_invalid_free-unit-test
+++ a/mm/slub.c
@@ -3236,12 +3236,12 @@ struct detached_freelist {
         struct kmem_cache *s;
 };
 
-static inline void free_nonslab_page(struct page *page)
+static inline void free_nonslab_page(struct page *page, void *object)
 {
         unsigned int order = compound_order(page);
 
         VM_BUG_ON_PAGE(!PageCompound(page), page);
-        kfree_hook(page_address(page));
+        kfree_hook(object);
         mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
         __free_pages(page, order);
 }
@@ -3282,7 +3282,7 @@ int build_detached_freelist(struct kmem_
         if (!s) {
                 /* Handle kalloc'ed objects */
                 if (unlikely(!PageSlab(page))) {
-                        free_nonslab_page(page);
+                        free_nonslab_page(page, object);
                         p[size] = NULL; /* mark object processed */
                         return size;
                 }
@@ -4258,7 +4258,7 @@ void kfree(const void *x)
 
         page = virt_to_head_page(x);
         if (unlikely(!PageSlab(page))) {
-                free_nonslab_page(page);
+                free_nonslab_page(page, object);
                 return;
         }
         slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
_

Patches currently in -mm which might be from shakeelb@xxxxxxxxxx are

writeback-memcg-simplify-cgroup_writeback_by_id.patch
memcg-switch-lruvec-stats-to-rstat.patch
memcg-infrastructure-to-flush-memcg-stats.patch
memcg-infrastructure-to-flush-memcg-stats-v5.patch
memcg-cleanup-racy-sum-avoidance-code.patch
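
For context, the test this patch restores is part of the KASAN KUnit
suite.  A minimal sketch of what such a case looks like is below; the
exact size constant, assertion macros, and surrounding boilerplate are
assumptions for illustration and may differ from the real
lib/test_kasan.c code.  The key point is that kfree() is called with an
offset pointer, which KASAN can only report if the slub free path hands
the caller-supplied address, rather than page_address() of the head
page, to kfree_hook().

#include <kunit/test.h>
#include <linux/slab.h>

/* Sketch only: KUNIT_EXPECT_KASAN_FAIL is assumed to be the KASAN test
 * suite's helper that passes when the wrapped expression triggers a
 * KASAN report.
 */
static void kmalloc_pagealloc_invalid_free(struct kunit *test)
{
        char *ptr;
        /* Larger than any kmalloc cache, so SLUB falls back to the page
         * allocator for this allocation.
         */
        size_t size = KMALLOC_MAX_CACHE_SIZE + 10;

        ptr = kmalloc(size, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

        /* Freeing anything other than the start of the allocation is an
         * invalid free; KASAN should catch and report it.
         */
        KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1));
}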