The patch titled
     Subject: mm/zsmalloc: convert init_zspage() to use zpdesc
has been added to the -mm mm-unstable branch.  Its filename is
     mm-zsmalloc-convert-init_zspage-to-use-zpdesc.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-zsmalloc-convert-init_zspage-to-use-zpdesc.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Subject: mm/zsmalloc: convert init_zspage() to use zpdesc
Date: Tue, 17 Dec 2024 00:04:39 +0900

Replace the get_first_page()/get_next_page() calls and kmap_local_page()
with the new zpdesc helpers.  No functional change.  (The zpdesc helpers
used below are introduced by earlier patches in this series; a sketch of
the wrapping pattern appears after the patch list at the end of this
mail.)

Link: https://lkml.kernel.org/r/20241216150450.1228021-9-42.hyeyoo@xxxxxxxxx
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Signed-off-by: Alex Shi <alexs@xxxxxxxxxx>
Acked-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Tested-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/zsmalloc.c |   16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

--- a/mm/zsmalloc.c~mm-zsmalloc-convert-init_zspage-to-use-zpdesc
+++ a/mm/zsmalloc.c
@@ -925,16 +925,16 @@ static void init_zspage(struct size_clas
 {
 	unsigned int freeobj = 1;
 	unsigned long off = 0;
-	struct page *page = get_first_page(zspage);
+	struct zpdesc *zpdesc = get_first_zpdesc(zspage);
 
-	while (page) {
-		struct page *next_page;
+	while (zpdesc) {
+		struct zpdesc *next_zpdesc;
 		struct link_free *link;
 		void *vaddr;
 
-		set_first_obj_offset(page, off);
+		set_first_obj_offset(zpdesc_page(zpdesc), off);
 
-		vaddr = kmap_local_page(page);
+		vaddr = kmap_local_zpdesc(zpdesc);
 		link = (struct link_free *)vaddr + off / sizeof(*link);
 
 		while ((off += class->size) < PAGE_SIZE) {
@@ -947,8 +947,8 @@ static void init_zspage(struct size_clas
 		 * page, which must point to the first object on the next
 		 * page (if present)
 		 */
-		next_page = get_next_page(page);
-		if (next_page) {
+		next_zpdesc = get_next_zpdesc(zpdesc);
+		if (next_zpdesc) {
 			link->next = freeobj++ << OBJ_TAG_BITS;
 		} else {
 			/*
@@ -958,7 +958,7 @@ static void init_zspage(struct size_clas
 			link->next = -1UL << OBJ_TAG_BITS;
 		}
 		kunmap_local(vaddr);
-		page = next_page;
+		zpdesc = next_zpdesc;
 		off %= PAGE_SIZE;
 	}
 
_

Patches currently in -mm which might be from 42.hyeyoo@xxxxxxxxx are

mm-migrate-remove-slab-checks-in-isolate_movable_page.patch
mm-zsmalloc-convert-__zs_map_object-__zs_unmap_object-to-use-zpdesc.patch
mm-zsmalloc-add-and-use-pfn-zpdesc-seeking-funcs.patch
mm-zsmalloc-convert-obj_malloc-to-use-zpdesc.patch
mm-zsmalloc-convert-obj_allocated-and-related-helpers-to-use-zpdesc.patch
mm-zsmalloc-convert-init_zspage-to-use-zpdesc.patch
mm-zsmalloc-convert-obj_to_page-and-zs_free-to-use-zpdesc.patch
mm-zsmalloc-add-two-helpers-for-zs_page_migrate-and-make-it-use-zpdesc.patch
mm-zsmalloc-convert-__free_zspage-to-use-zpdesc.patch
mm-zsmalloc-convert-location_to_obj-to-take-zpdesc.patch
mm-zsmalloc-convert-migrate_zspage-to-use-zpdesc.patch
mm-zsmalloc-convert-get_zspage-to-take-zpdesc.patch
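
For readers following the series patch-by-patch: the zpdesc accessors
this patch relies on (get_first_zpdesc(), get_next_zpdesc(),
kmap_local_zpdesc() and zpdesc_page()) are introduced by earlier patches
in the series.  Below is a minimal sketch of the wrapping pattern,
assuming struct zpdesc is a typed overlay of struct page; it is
illustrative only, and the in-tree definitions differ in detail (the
series validates the layout with static_assert() and uses stricter,
const-preserving casts than the bare casts shown here).

	#include <linux/highmem.h>
	#include <linux/mm.h>

	struct zpdesc;	/* assumed here: typed overlay of struct page */

	/*
	 * Simplified view conversions; bare casts stand in for the
	 * stricter helpers the series actually defines.
	 */
	#define zpdesc_page(zp)	((struct page *)(zp))
	#define page_zpdesc(p)	((struct zpdesc *)(p))

	/* Map the page backing a zpdesc, mirroring kmap_local_page(). */
	static inline void *kmap_local_zpdesc(struct zpdesc *zpdesc)
	{
		return kmap_local_page(zpdesc_page(zpdesc));
	}

	/*
	 * zsmalloc chains a zspage's subpages through page->index, with a
	 * NULL link terminating the chain.  Simplified: the real helper
	 * also bails out early for huge zspages.
	 */
	static inline struct zpdesc *get_next_zpdesc(struct zpdesc *zpdesc)
	{
		return page_zpdesc((struct page *)zpdesc_page(zpdesc)->index);
	}

With wrappers of this shape, the init_zspage() conversion above is a
mechanical substitution of each struct page use for its zpdesc
counterpart, which is why no functional change is expected.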