On (24/06/27 13:33), Andrew Morton wrote:
> On Thu, 27 Jun 2024 15:59:58 +0800 Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> 
> > We always use insert_zspage() and remove_zspage() to update zspage's
> > fullness location, which will account correctly.
> > 
> > But this special async free path use "splice" instead of remove_zspage(),
> > so the per-fullness zspage count for ZS_INUSE_RATIO_0 won't decrease.
> > 
> > Fix it by decreasing when iterate over the zspage free list.
> > 
> > ...
> > 
> > Signed-off-by: Chengming Zhou <chengming.zhou@xxxxxxxxx>
> > +++ b/mm/zsmalloc.c
> > @@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
> >  
> >  		class = zspage_class(pool, zspage);
> >  		spin_lock(&class->lock);
> > +		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
> >  		__free_zspage(pool, class, zspage);
> >  		spin_unlock(&class->lock);
> >  	}
> 
> What are the runtime effects of this bug?  Should we backport the fix
> into earlier kernels?  And are we able to identify the appropriate
> Fixes: target?

I don't think this has any run-time visible effects.  Class stats
(ZS_OBJS_ALLOCATED and ZS_OBJS_INUSE) play their role during compaction
(defragmentation), but ZS_INUSE_RATIO_0 is a zspage fullness group, and
one for empty zspages at that, which we don't look at during compaction.

With CONFIG_ZSMALLOC_STAT enabled we show pool/class stats to users via
zs_stats_size_show(), but ZS_INUSE_RATIO_0 is ignored there.  So no one
(external) should ever see what value it holds, and ZS_INUSE_RATIO_0
should never be of any importance to zsmalloc (internally) either.

The code in question (async_free_zspage()) was introduced by
48b4800a1c6af in 2016-07-26, so it has been there for a long time.
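
To make the "no visible effect" point concrete, here is a stand-alone
user-space sketch.  It is my own simplified model, not the kernel code:
the struct layout, the toy_* names and the NR_FULLNESS_GROUPS terminator
are assumptions that only loosely mirror mm/zsmalloc.c.  The idea it
illustrates is that the reporting loop starts at ZS_INUSE_RATIO_10, so
whatever count leaks into the ZS_INUSE_RATIO_0 slot is never read.

/*
 * Stand-alone illustration, not kernel code: a simplified model of the
 * per-class fullness counters.  Enum members and struct layout are
 * assumptions for demonstration only.
 */
#include <stdio.h>

enum fullness_group {
	ZS_INUSE_RATIO_0,	/* empty zspages; never reported */
	ZS_INUSE_RATIO_10,	/* first group the stats dump reads */
	/* ... intermediate ratio groups elided ... */
	ZS_INUSE_RATIO_100,
	NR_FULLNESS_GROUPS,
};

struct toy_class {
	unsigned long stats[NR_FULLNESS_GROUPS];
};

/*
 * Mimics the shape of the reporting loop: it starts at ZS_INUSE_RATIO_10,
 * so anything accumulated in ZS_INUSE_RATIO_0 is never shown to the user.
 */
static void toy_stats_show(struct toy_class *class)
{
	for (int fg = ZS_INUSE_RATIO_10; fg < NR_FULLNESS_GROUPS; fg++)
		printf("group %d: %lu\n", fg, class->stats[fg]);
}

int main(void)
{
	struct toy_class class = { .stats = { 0 } };

	/* Simulate the leak: the async free path forgets to drop slot 0. */
	class.stats[ZS_INUSE_RATIO_0] = 42;

	/* Output is identical whether slot 0 holds 0 or 42. */
	toy_stats_show(&class);
	return 0;
}

Building and running this (cc toy.c && ./a.out) prints the same output
no matter what is stored in the ZS_INUSE_RATIO_0 slot, which is the
reason the stale counter is invisible to users and to compaction alike.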