The patch titled
     Subject: mm: zswap: always shrink in zswap_store() if zswap_pool_reached_full
has been added to the -mm mm-unstable branch.  Its filename is
     mm-zswap-always-shrink-in-zswap_store-if-zswap_pool_reached_full.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-zswap-always-shrink-in-zswap_store-if-zswap_pool_reached_full.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Subject: mm: zswap: always shrink in zswap_store() if zswap_pool_reached_full
Date: Sat, 13 Apr 2024 02:24:04 +0000

Patch series "zswap same-filled and limit checking cleanups", v3.

Miscellaneous cleanups for limit checking and same-filled handling in the
store path.  This series was broken out of the "zswap: store zero-filled
pages more efficiently" series [1].  It contains the cleanups and drops
the main functional changes.

[1] https://lore.kernel.org/lkml/20240325235018.2028408-1-yosryahmed@xxxxxxxxxx/


This patch (of 4):

The cleanup code in zswap_store() is not pretty, particularly the
'shrink' label at the bottom that ends up jumping between cleanup labels.

Instead of having a dedicated label to shrink the pool, just use
zswap_pool_reached_full directly to figure out if the pool needs
shrinking.  zswap_pool_reached_full should be true if and only if the
pool needs shrinking.

The only caveat is that the value of zswap_pool_reached_full may be
changed by concurrent zswap_store() calls between checking the limit and
testing zswap_pool_reached_full in the cleanup code.  This is fine
because:

- If zswap_pool_reached_full was true during limit checking then became
  false during the cleanup code, then someone else already took care of
  shrinking the pool and there is no need to queue the worker.  That
  would be a good change.

- If zswap_pool_reached_full was false during limit checking then became
  true during the cleanup code, then someone else hit the limit
  meanwhile.  In this case, both threads will try to queue the worker,
  but it never gets queued more than once anyway.

Also, calling queue_work() multiple times when the limit is hit could
already happen today, so this isn't a significant change in any way.

Link: https://lkml.kernel.org/r/20240413022407.785696-1-yosryahmed@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20240413022407.785696-2-yosryahmed@xxxxxxxxxx
Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Reviewed-by: Nhat Pham <nphamcs@xxxxxxxxx>
Reviewed-by: Chengming Zhou <chengming.zhou@xxxxxxxxx>
Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: "Maciej S. Szmigiero" <mail@xxxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/zswap.c |   10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

--- a/mm/zswap.c~mm-zswap-always-shrink-in-zswap_store-if-zswap_pool_reached_full
+++ a/mm/zswap.c
@@ -1429,12 +1429,12 @@ bool zswap_store(struct folio *folio)
 	if (cur_pages >= max_pages) {
 		zswap_pool_limit_hit++;
 		zswap_pool_reached_full = true;
-		goto shrink;
+		goto reject;
 	}
 
 	if (zswap_pool_reached_full) {
 		if (cur_pages > zswap_accept_thr_pages())
-			goto shrink;
+			goto reject;
 		else
 			zswap_pool_reached_full = false;
 	}
@@ -1540,6 +1540,8 @@ freepage:
 	zswap_entry_cache_free(entry);
 reject:
 	obj_cgroup_put(objcg);
+	if (zswap_pool_reached_full)
+		queue_work(shrink_wq, &zswap_shrink_work);
 check_old:
 	/*
 	 * If the zswap store fails or zswap is disabled, we must invalidate the
@@ -1550,10 +1552,6 @@ check_old:
 	if (entry)
 		zswap_entry_free(entry);
 	return false;
-
-shrink:
-	queue_work(shrink_wq, &zswap_shrink_work);
-	goto reject;
 }
 
 bool zswap_load(struct folio *folio)
_

Patches currently in -mm which might be from yosryahmed@xxxxxxxxxx are

mm-memcg-add-null-check-to-obj_cgroup_put.patch
mm-zswap-remove-unnecessary-check-in-zswap_find_zpool.patch
percpu-clean-up-all-mappings-when-pcpu_map_pages-fails.patch
mm-zswap-remove-nr_zswap_stored-atomic.patch
mm-zswap-always-shrink-in-zswap_store-if-zswap_pool_reached_full.patch
mm-zswap-refactor-limit-checking-from-zswap_store.patch
mm-zswap-move-more-same-filled-pages-checks-outside-of-zswap_store.patch
mm-zswap-remove-same_filled-module-params.patch
mm-zswap-remove-same_filled-module-params-fix.patch