The quilt patch titled
     Subject: mm/zsmalloc: clarify class per-fullness zspage counts
has been removed from the -mm tree.  Its filename was
     mm-zsmalloc-fix-class-per-fullness-zspage-counts.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Chengming Zhou <chengming.zhou@xxxxxxxxx>
Subject: mm/zsmalloc: clarify class per-fullness zspage counts
Date: Thu, 27 Jun 2024 15:59:58 +0800

We always use insert_zspage() and remove_zspage() to update a zspage's
fullness location, which accounts correctly.

But this special async free path uses "splice" instead of remove_zspage(),
so the per-fullness zspage count for ZS_INUSE_RATIO_0 won't decrease.

Clean things up by decreasing it while iterating over the zspage free list.

This doesn't actually fix anything.  ZS_INUSE_RATIO_0 is just a
"placeholder" which is never used anywhere.

Link: https://lkml.kernel.org/r/20240627075959.611783-1-chengming.zhou@xxxxxxxxx
Signed-off-by: Chengming Zhou <chengming.zhou@xxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/zsmalloc.c |    1 +
 1 file changed, 1 insertion(+)

--- a/mm/zsmalloc.c~mm-zsmalloc-fix-class-per-fullness-zspage-counts
+++ a/mm/zsmalloc.c
@@ -1883,6 +1883,7 @@ static void async_free_zspage(struct wor
 		class = zspage_class(pool, zspage);
 		spin_lock(&class->lock);
+		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
 		__free_zspage(pool, class, zspage);
 		spin_unlock(&class->lock);
 	}
_

Patches currently in -mm which might be from chengming.zhou@xxxxxxxxx are