On Mon, Aug 02, 2021 at 11:08:18AM -0700, Shakeel Butt wrote:
> The unit test kmalloc_pagealloc_invalid_free makes sure that for a
> higher-order slub allocation, which goes to the page allocator, the
> free is called with the correct address, i.e. the virtual address of
> the head page.
>
> The commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk
> free") unified the free code paths for page allocator based slub
> allocations, but instead of using the address passed by the caller, it
> extracted the address from the page, making the unit test
> kmalloc_pagealloc_invalid_free moot. So, fix this by using the address
> passed by the caller.
>
> Should we fix this? I think yes, because developers expect KASAN to
> catch these types of programming bugs.
>
> Fixes: f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free")
> Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
> Reported-by: Nathan Chancellor <nathan@xxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Roman Gushchin <guro@xxxxxx>
> Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: Pekka Enberg <penberg@xxxxxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>

Thank you for the quick fix! It passes my tests on arm64 and x86_64 in
QEMU with a few different clang versions.

Tested-by: Nathan Chancellor <nathan@xxxxxxxxxx>

> ---
>  mm/slub.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index af984e4990e8..60aeedc436d5 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3236,12 +3236,12 @@ struct detached_freelist {
>  	struct kmem_cache *s;
>  };
>
> -static inline void free_nonslab_page(struct page *page)
> +static inline void free_nonslab_page(struct page *page, void *object)
>  {
>  	unsigned int order = compound_order(page);
>
>  	VM_BUG_ON_PAGE(!PageCompound(page), page);
> -	kfree_hook(page_address(page));
> +	kfree_hook(object);
>  	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
>  	__free_pages(page, order);
>  }
> @@ -3282,7 +3282,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
>  	if (!s) {
>  		/* Handle kalloc'ed objects */
>  		if (unlikely(!PageSlab(page))) {
> -			free_nonslab_page(page);
> +			free_nonslab_page(page, object);
>  			p[size] = NULL; /* mark object processed */
>  			return size;
>  		}
> @@ -4258,7 +4258,7 @@ void kfree(const void *x)
>
>  	page = virt_to_head_page(x);
>  	if (unlikely(!PageSlab(page))) {
> -		free_nonslab_page(page);
> +		free_nonslab_page(page, object);
>  		return;
>  	}
>  	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
> --
> 2.32.0.554.ge1b32706d8-goog
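
For anyone following along, the test this restores boils down to passing
an offset pointer to kfree() for an allocation large enough to bypass
the slab caches. A minimal sketch from memory of what lib/test_kasan.c
does (the exact assertion macros there may differ; KUNIT_EXPECT_KASAN_FAIL
is a helper local to that file):

#include <kunit/test.h>
#include <linux/slab.h>

/*
 * Allocate past KMALLOC_MAX_CACHE_SIZE so the allocation is served by
 * the page allocator rather than a slab cache, then free at an offset
 * pointer. KASAN should report an invalid free. With
 * kfree_hook(page_address(page)), the hook always saw the (always
 * valid) head page address, so the bad pointer was never reported;
 * passing the caller's pointer restores the check.
 */
static void kmalloc_pagealloc_invalid_free(struct kunit *test)
{
	char *ptr;
	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	/* ptr + 1 is not the address kmalloc() returned. */
	KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1));
}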