We always use insert_zspage() and remove_zspage() to update a zspage's
fullness location, which keeps the per-fullness counts correct. But
this special async free path uses "splice" instead of remove_zspage(),
so the per-fullness zspage count for ZS_INUSE_RATIO_0 is never
decremented. Fix it by decrementing the count while iterating over the
zspage free list.

Signed-off-by: Chengming Zhou <chengming.zhou@xxxxxxxxx>
---
 mm/zsmalloc.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index fec1a39e5bbe..7fc25fa4e6b3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
 
 		class = zspage_class(pool, zspage);
 		spin_lock(&class->lock);
+		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
 		__free_zspage(pool, class, zspage);
 		spin_unlock(&class->lock);
 	}
-- 
2.20.1
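
For context, here is a simplified sketch of the free-list walk in
async_free_zspage() with the fix applied. It is reconstructed from the
hunk context above rather than quoted from the tree, so the lines
outside the diff (the list iteration, list_del(), lock_zspage()) are
assumptions about the surrounding code, not authoritative source:

	/*
	 * Sketch (not verbatim kernel source): these zspages were moved
	 * off their per-class fullness list with a list splice, which,
	 * unlike remove_zspage(), does not call class_stat_dec(). The
	 * ZS_INUSE_RATIO_0 count is therefore dropped here, under the
	 * class lock, just before each zspage is actually freed.
	 */
	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
		list_del(&zspage->list);
		lock_zspage(zspage);

		class = zspage_class(pool, zspage);
		spin_lock(&class->lock);
		/* the fix: balance the count the splice left behind */
		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
		__free_zspage(pool, class, zspage);
		spin_unlock(&class->lock);
	}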