On 2024/1/23 03:49, Yosry Ahmed wrote:
> On Fri, Jan 19, 2024 at 3:22 AM Chengming Zhou
> <zhouchengming@xxxxxxxxxxxxx> wrote:
>>
>> Each swapfile has one rb-tree to search the mapping of swp_entry_t to
>> zswap_entry, protected by a spinlock, which can cause heavy lock
>> contention if multiple tasks zswap_store/load concurrently.
>>
>> Optimize the scalability problem by splitting the zswap rb-tree into
>> multiple rb-trees, each corresponding to SWAP_ADDRESS_SPACE_PAGES (64M),
>> just like we did for the swap cache address_space splitting.
>>
>> Although this method can't eliminate the spinlock contention completely,
>> it can mitigate much of that contention. Below are the results of a
>> kernel build in tmpfs with the zswap shrinker enabled:
>>
>>         linux-next    zswap-lock-optimize
>> real    1m9.181s      1m3.820s
>> user    17m44.036s    17m40.100s
>> sys     7m37.297s     4m54.622s
>>
>> So there are clear improvements.
>
> If/when you respin this, can you mention that testing was done with a
> single swapfile? I assume the improvements will be less with multiple
> swapfiles as lock contention should be better.

Ok. Not sure how much improvement that would show; I may do some tests later.

>>
>> Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
>> Acked-by: Nhat Pham <nphamcs@xxxxxxxxx>
>
> I think the diff in zswap_swapoff() should be much simpler with the
> tree(s) cleanup removed. Otherwise LGTM.
>
> Acked-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>

Right, thanks!
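
For readers skimming the thread, the splitting idea in the cover letter
boils down to: map each swap offset to one of several trees by shifting
it down by the SWAP_ADDRESS_SPACE granularity, so entries in the same
64M range share one tree (and one lock). Below is a minimal userspace
sketch of that mapping, assuming 4K pages (so 1 << 14 slots per 64M
range); the helper name zswap_tree_index() is made up for illustration
and is not the patch's actual code.

#include <stdio.h>

/*
 * Assumed values for illustration: with 4K pages, 64M of swap space
 * corresponds to 16384 (1 << 14) swap slots per tree, matching the
 * SWAP_ADDRESS_SPACE_PAGES granularity mentioned in the cover letter.
 */
#define SWAP_ADDRESS_SPACE_SHIFT 14UL

/*
 * Hypothetical helper (not the patch code): pick which per-swapfile
 * tree covers a given swap offset, the same way the swap cache
 * address_space is split per SWAP_ADDRESS_SPACE_PAGES.
 */
static unsigned long zswap_tree_index(unsigned long swp_offset)
{
	return swp_offset >> SWAP_ADDRESS_SPACE_SHIFT;
}

int main(void)
{
	unsigned long offsets[] = { 0, 16383, 16384, 100000 };

	for (unsigned int i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++)
		printf("swap offset %lu -> tree %lu\n",
		       offsets[i], zswap_tree_index(offsets[i]));
	return 0;
}

Offsets 0 and 16383 land in tree 0, while 16384 lands in tree 1, so
concurrent zswap_store/load on different 64M ranges no longer contend
on the same spinlock.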