On Wed, 20 Mar 2024 12:31:38 -0700 Chris Li <chrisl@xxxxxxxxxx> wrote:

> A very deep RB tree requires rebalancing at times, which
> contributes to zswap fault latencies. An xarray does not
> need to perform tree rebalancing, so replacing the RB tree
> with an xarray can yield a small performance gain.
>
> One small difference is that an xarray insert might fail
> with ENOMEM, while an RB tree insert does not allocate
> additional memory.
>
> The zswap_entry size shrinks a bit because the RB node,
> which holds two pointers and a color field, is removed. The
> xarray stores the pointer in the xarray tree rather than in
> the zswap_entry; every entry has one pointer from the
> xarray tree. Overall, switching to an xarray should save
> some memory if the swap entries are densely packed.
>
> Note that zswap_rb_search and zswap_rb_insert are always
> followed by zswap_rb_erase, so xa_erase and xa_store are
> used directly. That saves one tree lookup as well.
>
> zswap_invalidate_entry is removed since there is no longer
> any need to call zswap_rb_erase; zswap_free_entry is used
> instead.
>
> "struct zswap_tree" has been replaced by "struct xarray",
> and the tree spin lock has been transferred to the xarray
> lock.
>
> The kernel build test was run 10 times for each version;
> averages below:
> (memory.max=2GB, zswap shrinker and writeback enabled,
> one 50GB swapfile, 24 HT cores, 32 jobs)

So this conflicts with Johannes's "mm: zswap: fix data loss on
SWP_SYNCHRONOUS_IO devices", right in the critical part of
zswap_load(). A naive resolution of that conflict would have
resulted in basically reverting Johannes's fix.

That fix is cc:stable, so we do want it to have a clean run in
linux-next before sending it upstream. So I'll drop this patch
("zswap: replace RB tree with xarray") for now. Please redo it
against the latest mm-unstable and, of course, be sure to
preserve Johannes's fix, thanks.
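
For reference, the combined store/erase pattern the quoted message
describes can be sketched in kernel-style C. This is an illustrative
sketch, not the actual patch: the helper names zswap_tree_store,
zswap_tree_erase, and zswap_entry_free are hypothetical, while
xa_store(), xa_erase(), xa_is_err() and xa_err() are the real xarray
API from <linux/xarray.h>.

```c
#include <linux/xarray.h>

/* Sketch: insert-or-replace in a single xarray pass.  Unlike an RB
 * tree insert, xa_store() may allocate internal tree nodes, so it
 * can fail with -ENOMEM, as the quoted message notes. */
static int zswap_tree_store(struct xarray *tree, pgoff_t offset,
			    struct zswap_entry *entry)
{
	/* xa_store() returns the previous entry at this index,
	 * or an xa_err()-encoded pointer on failure. */
	void *old = xa_store(tree, offset, entry, GFP_KERNEL);

	if (xa_is_err(old))
		return xa_err(old);	/* e.g. -ENOMEM */
	if (old)
		zswap_entry_free(old);	/* replaced a stale entry */
	return 0;
}

/* Sketch: lookup-and-remove in one pass.  xa_erase() returns the
 * entry that was stored, so no separate search precedes the erase;
 * this is the "saves one tree lookup" point above. */
static struct zswap_entry *zswap_tree_erase(struct xarray *tree,
					    pgoff_t offset)
{
	return xa_erase(tree, offset);
}
```

With the RB tree, the same operations required a search (or insert)
under the tree lock followed by a separate zswap_rb_erase; the xarray
collapses each into one locked traversal.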