The patch titled
     Subject: mm/zsmalloc: remove get_zspage_mapping()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-zsmalloc-remove-get_zspage_mapping.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-zsmalloc-remove-get_zspage_mapping.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
Subject: mm/zsmalloc: remove get_zspage_mapping()
Date: Tue, 20 Feb 2024 06:53:02 +0000

We seldom use the class_idx returned from get_zspage_mapping(); only
zspage->fullness is actually needed, so read zspage->fullness directly
and remove this helper.

Note that zspage->fullness is not stable outside pool->lock, so remove
the redundant "VM_BUG_ON(fullness != ZS_INUSE_RATIO_0)" in
async_free_zspage(); we already have the same VM_BUG_ON() in
__free_zspage(), where it is safe to access zspage->fullness with
pool->lock held.

Link: https://lkml.kernel.org/r/20240220-b4-zsmalloc-cleanup-v1-3-5c5ee4ccdd87@xxxxxxxxxxxxx
Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Nhat Pham <nphamcs@xxxxxxxxx>
Cc: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/zsmalloc.c |   28 ++++------------------------
 1 file changed, 4 insertions(+), 24 deletions(-)

--- a/mm/zsmalloc.c~mm-zsmalloc-remove-get_zspage_mapping
+++ a/mm/zsmalloc.c
@@ -470,16 +470,6 @@ static inline void set_freeobj(struct zs
 	zspage->freeobj = obj;
 }
 
-static void get_zspage_mapping(struct zspage *zspage,
-			       unsigned int *class_idx,
-			       int *fullness)
-{
-	BUG_ON(zspage->magic != ZSPAGE_MAGIC);
-
-	*fullness = zspage->fullness;
-	*class_idx = zspage->class;
-}
-
 static struct size_class *zspage_class(struct zs_pool *pool,
 				       struct zspage *zspage)
 {
@@ -708,12 +698,10 @@ static void remove_zspage(struct size_cl
  */
 static int fix_fullness_group(struct size_class *class, struct zspage *zspage)
 {
-	int class_idx;
-	int currfg, newfg;
+	int newfg;
 
-	get_zspage_mapping(zspage, &class_idx, &currfg);
 	newfg = get_fullness_group(class, zspage);
-	if (newfg == currfg)
+	if (newfg == zspage->fullness)
 		goto out;
 
 	remove_zspage(class, zspage);
@@ -835,15 +823,11 @@ static void __free_zspage(struct zs_pool
 					struct zspage *zspage)
 {
 	struct page *page, *next;
-	int fg;
-	unsigned int class_idx;
-
-	get_zspage_mapping(zspage, &class_idx, &fg);
 
 	assert_spin_locked(&pool->lock);
 
 	VM_BUG_ON(get_zspage_inuse(zspage));
-	VM_BUG_ON(fg != ZS_INUSE_RATIO_0);
+	VM_BUG_ON(zspage->fullness != ZS_INUSE_RATIO_0);
 
 	next = page = get_first_page(zspage);
 	do {
@@ -1857,8 +1841,6 @@ static void async_free_zspage(struct wor
 {
 	int i;
 	struct size_class *class;
-	unsigned int class_idx;
-	int fullness;
 	struct zspage *zspage, *tmp;
 	LIST_HEAD(free_pages);
 	struct zs_pool *pool = container_of(work, struct zs_pool,
@@ -1879,10 +1861,8 @@ static void async_free_zspage(struct wor
 		list_del(&zspage->list);
 		lock_zspage(zspage);
 
-		get_zspage_mapping(zspage, &class_idx, &fullness);
-		VM_BUG_ON(fullness != ZS_INUSE_RATIO_0);
-		class = pool->size_class[class_idx];
 		spin_lock(&pool->lock);
+		class = zspage_class(pool, zspage);
 		__free_zspage(pool, class, zspage);
 		spin_unlock(&pool->lock);
 	}
_
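[Editor's note: the invariant this patch relies on is that zspage->fullness is
only stable while pool->lock is held, which is why the direct reads in the diff
above all happen under the lock. The userspace C sketch below illustrates that
post-patch pattern in isolation; it is not kernel code. A pthread mutex stands
in for the pool spinlock, the structs are simplified placeholders, and
get_fullness_group() here is a trivial stand-in for the real computation.]

/*
 * Userspace sketch of the post-patch locking pattern -- NOT kernel code.
 * pthread_mutex_t stands in for pool->lock, and the structs below are
 * simplified placeholders for zsmalloc's real zspage and zs_pool.
 *
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

struct zspage {
	int inuse;	/* simplified count of objects in use */
	int fullness;	/* only stable while pool->lock is held */
};

struct zs_pool {
	pthread_mutex_t lock;	/* stand-in for pool->lock */
};

/* Placeholder: real zsmalloc derives the group from objects in use. */
static int get_fullness_group(struct zspage *zspage)
{
	return zspage->inuse == 0 ? 0 : 1;
}

/*
 * Post-patch shape of fix_fullness_group(): the caller already holds
 * pool->lock, so zspage->fullness can be read and written directly;
 * no get_zspage_mapping()-style helper is needed.
 */
static int fix_fullness_group(struct zspage *zspage)
{
	int newfg = get_fullness_group(zspage);

	if (newfg == zspage->fullness)	/* direct read, lock held */
		return newfg;

	/* ... remove from the old fullness list, insert into the new ... */
	zspage->fullness = newfg;
	return newfg;
}

int main(void)
{
	struct zs_pool pool = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct zspage zspage = { .inuse = 0, .fullness = 1 };

	pthread_mutex_lock(&pool.lock);	/* fullness is stable from here on */
	fix_fullness_group(&zspage);
	pthread_mutex_unlock(&pool.lock);

	printf("fullness=%d\n", zspage.fullness);
	return 0;
}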
Patches currently in -mm which might be from zhouchengming@xxxxxxxxxxxxx are

mm-zswap-invalidate-duplicate-entry-when-zswap_enabled.patch
mm-zswap-make-sure-each-swapfile-always-have-zswap-rb-tree.patch
mm-zswap-split-zswap-rb-tree.patch
mm-zswap-fix-race-between-lru-writeback-and-swapoff.patch
mm-list_lru-remove-list_lru_putback.patch
mm-zswap-add-more-comments-in-shrink_memcg_cb.patch
mm-zswap-invalidate-zswap-entry-when-swap-entry-free.patch
mm-zswap-stop-lru-list-shrinking-when-encounter-warm-region.patch
mm-zswap-remove-duplicate_entry-debug-value.patch
mm-zswap-only-support-zswap_exclusive_loads_enabled.patch
mm-zswap-zswap-entry-doesnt-need-refcount-anymore.patch
mm-zswap-optimize-and-cleanup-the-invalidation-of-duplicate-entry.patch
mm-zsmalloc-fix-migrate_write_lock-when-config_compaction.patch
mm-zsmalloc-remove-migrate_write_lock_nested.patch
mm-zsmalloc-remove-unused-zspage-isolated.patch
mm-zswap-global-lru-and-shrinker-shared-by-all-zswap_pools.patch
mm-zswap-change-zswap_pool-kref-to-percpu_ref.patch
mm-zsmalloc-remove-set_zspage_mapping.patch
mm-zsmalloc-remove_zspage-dont-need-fullness-parameter.patch
mm-zsmalloc-remove-get_zspage_mapping.patch