On 2024/2/17 13:36, Barry Song wrote:
> From: Barry Song <v-songbaohua@xxxxxxxx>
>
> We used to rely on the -ENOSPC returned by zpool_malloc() to increase
> reject_compress_poor. But the code wouldn't get there after commit
> 744e1885922a ("crypto: scomp - fix req->dst buffer overflow"), as the
> new code will goto out immediately after the special compression case
> happens, so there may no longer be a chance to execute zpool_malloc()
> now. We are incorrectly increasing zswap_reject_compress_fail instead.
> Thus, we need to fix the counter handling right after compression
> returns -ENOSPC. This patch also centralizes the counter handling for
> all of compress_poor, compress_fail and alloc_fail.
>
> Fixes: 744e1885922a ("crypto: scomp - fix req->dst buffer overflow")
> Cc: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
> Cc: Nhat Pham <nphamcs@xxxxxxxxx>
> Cc: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
> Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>

LGTM, thanks!

Reviewed-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>

> ---
> -v2:
>  * correct the fixes target according to Yosry, Chengming, Nhat's
>    comments;
>  * centralize the counters handling according to Yosry's comment
>
>  mm/zswap.c | 25 ++++++++++++-------------
>  1 file changed, 12 insertions(+), 13 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 350dd2fc8159..47cf07d56362 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1498,6 +1498,7 @@ bool zswap_store(struct folio *folio)
>  	struct zswap_tree *tree = zswap_trees[type];
>  	struct zswap_entry *entry, *dupentry;
>  	struct scatterlist input, output;
> +	int comp_ret = 0, alloc_ret = 0;
>  	struct crypto_acomp_ctx *acomp_ctx;
>  	struct obj_cgroup *objcg = NULL;
>  	struct mem_cgroup *memcg = NULL;
> @@ -1508,7 +1509,6 @@ bool zswap_store(struct folio *folio)
>  	char *buf;
>  	u8 *src, *dst;
>  	gfp_t gfp;
> -	int ret;
>
>  	VM_WARN_ON_ONCE(!folio_test_locked(folio));
>  	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> @@ -1621,28 +1621,20 @@ bool zswap_store(struct folio *folio)
>  	 * but in different threads running on different cpu, we have different
>  	 * acomp instance, so multiple threads can do (de)compression in parallel.
>  	 */
> -	ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
> +	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
>  	dlen = acomp_ctx->req->dlen;
>
> -	if (ret) {
> -		zswap_reject_compress_fail++;
> +	if (comp_ret)
>  		goto put_dstmem;
> -	}
>
>  	/* store */
>  	zpool = zswap_find_zpool(entry);
>  	gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
>  	if (zpool_malloc_support_movable(zpool))
>  		gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
> -	ret = zpool_malloc(zpool, dlen, gfp, &handle);
> -	if (ret == -ENOSPC) {
> -		zswap_reject_compress_poor++;
> -		goto put_dstmem;
> -	}
> -	if (ret) {
> -		zswap_reject_alloc_fail++;
> +	alloc_ret = zpool_malloc(zpool, dlen, gfp, &handle);
> +	if (alloc_ret)
>  		goto put_dstmem;
> -	}
>  	buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
>  	memcpy(buf, dst, dlen);
>  	zpool_unmap_handle(zpool, handle);
> @@ -1689,6 +1681,13 @@ bool zswap_store(struct folio *folio)
>  	return true;
>
>  put_dstmem:
> +	if (comp_ret == -ENOSPC || alloc_ret == -ENOSPC)
> +		zswap_reject_compress_poor++;
> +	else if (comp_ret)
> +		zswap_reject_compress_fail++;
> +	else if (alloc_ret)
> +		zswap_reject_alloc_fail++;
> +
>  	mutex_unlock(&acomp_ctx->mutex);
>  put_pool:
>  	zswap_pool_put(entry->pool);