These patches aim to simplify zswap_swapoff() by removing the unnecessary
trees cleanup code. Patch 1 enforces the correct order of operations
during swapoff, ensuring that the simplification in patch 2 is correct in
a future-proof manner.

This is based on mm-unstable and v2 of the "mm/zswap: optimize the
scalability of zswap rb-tree" series [1].

[1]https://lore.kernel.org/lkml/20240117-b4-zswap-lock-optimize-v2-0-b5cc55479090@xxxxxxxxxxxxx/

Yosry Ahmed (2):
  mm: swap: enforce updating inuse_pages at the end of swap_range_free()
  mm: zswap: remove unnecessary trees cleanups in zswap_swapoff()

 mm/swapfile.c | 18 +++++++++++++++---
 mm/zswap.c    | 16 +++-------------
 2 files changed, 18 insertions(+), 16 deletions(-)
--
2.43.0.429.g432eaa2c6b-goog