On Tue, Jan 04, 2022 at 01:10:25AM +0100, Vlastimil Babka wrote:
> From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
> 
> Convert kfree(), kmem_cache_free() and ___cache_free() to resolve object
> addresses to struct slab, using folio as intermediate step where needed.
> Keep passing the result as struct page for now in preparation for mass
> conversion of internal functions.
> 
> [ vbabka@xxxxxxx: Use folio as intermediate step when checking for
>   large kmalloc pages, and when freeing them - rename
>   free_nonslab_page() to free_large_kmalloc() that takes struct folio ]
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> ---
>  mm/slub.c | 29 ++++++++++++++++-------------
>  1 file changed, 16 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index a45b74d2712f..acf2608a57c5 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3517,7 +3517,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
>  #ifdef CONFIG_KASAN_GENERIC
>  void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
>  {
> -	do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr);
> +	do_slab_free(cache, slab_page(virt_to_slab(x)), x, NULL, 1, addr);
>  }
>  #endif
> 
> @@ -3527,7 +3527,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
>  	if (!s)
>  		return;
>  	trace_kmem_cache_free(_RET_IP_, x, s->name);
> -	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
> +	slab_free(s, slab_page(virt_to_slab(x)), x, NULL, 1, _RET_IP_);
>  }
>  EXPORT_SYMBOL(kmem_cache_free);
> 
> @@ -3539,16 +3539,17 @@ struct detached_freelist {
>  	struct kmem_cache *s;
>  };
> 
> -static inline void free_nonslab_page(struct page *page, void *object)
> +static inline void free_large_kmalloc(struct folio *folio, void *object)

It's way more clear now what it's all about. Thanks!

Reviewed-by: Roman Gushchin <guro@xxxxxx>
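
As an aside for readers following along: the before/after calling convention can
be sketched roughly like this. This is an illustrative fragment only, lifted from
the hunks quoted above (it is not compilable on its own); 's' and 'x' stand for
the kmem_cache and the object being freed.

	/*
	 * Old pattern: map the object address straight to its head page
	 * and pass that page down.
	 */
	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);

	/*
	 * New pattern: resolve the object to a struct slab first, then
	 * hand it on as a struct page via slab_page() until the internal
	 * functions are converted in later patches.
	 */
	slab_free(s, slab_page(virt_to_slab(x)), x, NULL, 1, _RET_IP_);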