When storing same-filled pages, there is no point in checking the global
zswap limit, as storing them does not contribute toward the limit. Move
the limit checking after same-filled pages are handled.

This avoids having same-filled pages skip zswap and go to disk swap if
the limit is hit. It also avoids queueing the shrink worker, which may
end up being unnecessary if the zswap usage goes down on its own before
another store is attempted.

Ignoring the memcg limits as well for same-filled pages is more
controversial. Those limits are more a matter of per-workload policy.
Some workloads disable zswap completely by setting memory.zswap.max = 0,
and those workloads could start observing some zswap activity even after
disabling zswap. Although harmless, this could cause confusion to
userspace. Remain conservative and keep respecting those limits.

Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
---
 mm/zswap.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index a85c9235d19d3..8763a1e938441 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1404,6 +1404,7 @@ bool zswap_store(struct folio *folio)
 	struct zswap_entry *entry, *old;
 	struct obj_cgroup *objcg = NULL;
 	struct mem_cgroup *memcg = NULL;
+	bool same_filled = false;
 	unsigned long value;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
@@ -1427,7 +1428,8 @@ bool zswap_store(struct folio *folio)
 		mem_cgroup_put(memcg);
 	}
 
-	if (zswap_check_full())
+	same_filled = zswap_is_folio_same_filled(folio, &value);
+	if (!same_filled && zswap_check_full())
 		goto reject;
 
 	/* allocate entry */
@@ -1437,7 +1439,7 @@ bool zswap_store(struct folio *folio)
 		goto reject;
 	}
 
-	if (zswap_is_folio_same_filled(folio, &value)) {
+	if (same_filled) {
 		entry->length = 0;
 		entry->value = value;
 		atomic_inc(&zswap_same_filled_pages);
-- 
2.44.0.478.gd926399ef9-goog
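
For readers who prefer not to apply the hunks mentally, below is a condensed
sketch of how the store path reads with this change applied. The identifiers
(zswap_is_folio_same_filled(), zswap_check_full(), zswap_same_filled_pages)
are taken from the diff above; everything marked with "..." comments
(cgroup handling, entry allocation, error paths, the final insert) is
abbreviated and is not literal mm/zswap.c source, so this is not compilable
as-is.

bool zswap_store(struct folio *folio)
{
	struct zswap_entry *entry;
	unsigned long value;
	bool same_filled;

	/* ... objcg lookup and per-memcg (memory.zswap.max) checks, unchanged ... */

	/*
	 * Same-filled detection now runs before the global limit check, so
	 * a full pool no longer bounces pages that would not have consumed
	 * pool space, and the shrink worker is not queued needlessly.
	 */
	same_filled = zswap_is_folio_same_filled(folio, &value);
	if (!same_filled && zswap_check_full())
		goto reject;

	/* ... allocate entry, goto reject on failure ... */

	if (same_filled) {
		/* record only the repeated word; nothing is compressed */
		entry->length = 0;
		entry->value = value;
		atomic_inc(&zswap_same_filled_pages);
	} else {
		/* ... compress the folio and account it toward the limits ... */
	}

	/* ... store the entry and return success ... */
	return true;

reject:
	return false;
}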