Changes in v2:
- Fix error handling in zswap_pool_create(), thanks Dan Carpenter.
- Add Reviewed-by tag from Nhat, thanks.
- Improve changelog to explain about other backends, per Yu Zhao.
- Link to v1: https://lore.kernel.org/r/20240617-zsmalloc-lock-mm-everything-v1-0-5e5081ea11b3@xxxxxxxxx

Commit c0547d0b6a4b ("zsmalloc: consolidate zs_pool's migrate_lock and
size_class's locks") changed the per-size_class lock to a pool spinlock
to prepare for reclaim support in zsmalloc. Reclaim support in zsmalloc
was later dropped in favor of LRU reclaim in zswap, but this locking
change was left in place.

Obviously, the scalability of the pool spinlock is worse than that of
the per-size_class locks. The current workaround is to use 32 pools in
zswap to avoid this scalability problem, which brings its own problems,
such as memory waste and more memory fragmentation.

So this series changes back to the per-size_class lock and uses test
data from a heavily stressed situation to verify that we can use only
one pool in zswap. Note that we only test and care about the zsmalloc
backend, which makes sense now since zsmalloc has become a lot more
popular than the other backends.

Testing kernel build (make bzImage -j32) on tmpfs with memory.max=1GB,
and zswap shrinker enabled with a 10GB swapfile on ext4:

                                real    user     sys
6.10.0-rc3                    138.18 1241.38 1452.73
6.10.0-rc3-onepool            149.45 1240.45 1844.69
6.10.0-rc3-onepool-perclass   138.23 1242.37 1469.71

We can see from the "sys" column that per-size_class locking with only
one pool in zswap performs nearly as well as the current 32 pools.

Signed-off-by: Chengming Zhou <chengming.zhou@xxxxxxxxx>
---
Chengming Zhou (2):
      mm/zsmalloc: change back to per-size_class lock
      mm/zswap: use only one pool in zswap

 mm/zsmalloc.c | 85 +++++++++++++++++++++++++++++++++++------------------------
 mm/zswap.c    | 60 +++++++++++++----------------------------
 2 files changed, 69 insertions(+), 76 deletions(-)
---
base-commit: 7c4c5a2ebbcea9031dbb130bb529c8eba025b16a
change-id: 20240617-zsmalloc-lock-mm-everything-387ada6e3ac9

Best regards,
-- 
Chengming Zhou <chengming.zhou@xxxxxxxxx>