The patch titled
     Subject: mm/zsmalloc: convert __free_zspage() to use zpdesc
has been added to the -mm mm-unstable branch.  Its filename is
     mm-zsmalloc-convert-__free_zspage-to-use-zpdesc.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-zsmalloc-convert-__free_zspage-to-use-zpdesc.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Subject: mm/zsmalloc: convert __free_zspage() to use zpdesc
Date: Tue, 17 Dec 2024 00:04:43 +0900

Introduce zpdesc_is_locked() and convert __free_zspage() to use zpdesc.

Link: https://lkml.kernel.org/r/20241216150450.1228021-13-42.hyeyoo@xxxxxxxxx
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Signed-off-by: Alex Shi <alexs@xxxxxxxxxx>
Acked-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Tested-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/zpdesc.h   |    4 ++++
 mm/zsmalloc.c |   20 ++++++++++----------
 2 files changed, 14 insertions(+), 10 deletions(-)

--- a/mm/zpdesc.h~mm-zsmalloc-convert-__free_zspage-to-use-zpdesc
+++ a/mm/zpdesc.h
@@ -165,4 +165,8 @@ static inline struct zone *zpdesc_zone(s
 	return page_zone(zpdesc_page(zpdesc));
 }
 
+static inline bool zpdesc_is_locked(struct zpdesc *zpdesc)
+{
+	return folio_test_locked(zpdesc_folio(zpdesc));
+}
 #endif
--- a/mm/zsmalloc.c~mm-zsmalloc-convert-__free_zspage-to-use-zpdesc
+++ a/mm/zsmalloc.c
@@ -878,23 +878,23 @@ unlock:
 static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 			  struct zspage *zspage)
 {
-	struct page *page, *next;
+	struct zpdesc *zpdesc, *next;
 
 	assert_spin_locked(&class->lock);
 
 	VM_BUG_ON(get_zspage_inuse(zspage));
 	VM_BUG_ON(zspage->fullness != ZS_INUSE_RATIO_0);
 
-	next = page = get_first_page(zspage);
+	next = zpdesc = get_first_zpdesc(zspage);
 	do {
-		VM_BUG_ON_PAGE(!PageLocked(page), page);
-		next = get_next_page(page);
-		reset_zpdesc(page_zpdesc(page));
-		unlock_page(page);
-		dec_zone_page_state(page, NR_ZSPAGES);
-		put_page(page);
-		page = next;
-	} while (page != NULL);
+		VM_BUG_ON_PAGE(!zpdesc_is_locked(zpdesc), zpdesc_page(zpdesc));
+		next = get_next_zpdesc(zpdesc);
+		reset_zpdesc(zpdesc);
+		zpdesc_unlock(zpdesc);
+		zpdesc_dec_zone_page_state(zpdesc);
+		zpdesc_put(zpdesc);
+		zpdesc = next;
+	} while (zpdesc != NULL);
 
 	cache_free_zspage(pool, zspage);
_

Patches currently in -mm which might be from 42.hyeyoo@xxxxxxxxx are

mm-migrate-remove-slab-checks-in-isolate_movable_page.patch
mm-zsmalloc-convert-__zs_map_object-__zs_unmap_object-to-use-zpdesc.patch
mm-zsmalloc-add-and-use-pfn-zpdesc-seeking-funcs.patch
mm-zsmalloc-convert-obj_malloc-to-use-zpdesc.patch
mm-zsmalloc-convert-obj_allocated-and-related-helpers-to-use-zpdesc.patch
mm-zsmalloc-convert-init_zspage-to-use-zpdesc.patch
mm-zsmalloc-convert-obj_to_page-and-zs_free-to-use-zpdesc.patch
mm-zsmalloc-add-two-helpers-for-zs_page_migrate-and-make-it-use-zpdesc.patch
mm-zsmalloc-convert-__free_zspage-to-use-zpdesc.patch
mm-zsmalloc-convert-location_to_obj-to-take-zpdesc.patch
mm-zsmalloc-convert-migrate_zspage-to-use-zpdesc.patch
mm-zsmalloc-convert-get_zspage-to-take-zpdesc.patch
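
A side note on the converted loop in the diff above: it preserves the one
property the teardown depends on, namely that the next pointer is captured
before the current descriptor is released, since reset_zpdesc()/zpdesc_put()
can free the memory that holds the link.  Below is a minimal userspace C
sketch of that capture-then-release pattern, illustration only; struct desc,
make_chain() and the printed ids are hypothetical stand-ins, not kernel APIs.

	/*
	 * Illustration only: walk and free a chain of descriptors,
	 * grabbing the link before tearing down the current node,
	 * mirroring the zpdesc walk in __free_zspage().
	 */
	#include <stdio.h>
	#include <stdlib.h>

	struct desc {
		int id;
		struct desc *next;
	};

	/* Build a short chain, analogous to a zspage's page chain. */
	static struct desc *make_chain(int n)
	{
		struct desc *head = NULL;

		while (n--) {
			struct desc *d = malloc(sizeof(*d));

			if (!d)
				exit(EXIT_FAILURE);
			d->id = n;
			d->next = head;
			head = d;
		}
		return head;
	}

	int main(void)
	{
		struct desc *cur, *next;

		next = cur = make_chain(4);	/* next = zpdesc = get_first_zpdesc() */
		do {
			next = cur->next;	/* grab the link before teardown */
			printf("releasing desc %d\n", cur->id);
			free(cur);		/* after this, cur->next is gone */
			cur = next;
		} while (cur != NULL);
		return 0;
	}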