On 2023/12/7 11:13, Chengming Zhou wrote:
> On 2023/12/7 04:08, Nhat Pham wrote:
>> On Wed, Dec 6, 2023 at 1:46 AM Chengming Zhou
>> <zhouchengming@xxxxxxxxxxxxx> wrote:
>>> When testing zswap performance with a kernel build (-j32) in a tmpfs
>>> directory, I found that the scalability of the zswap rb-tree is poor:
>>> the tree is protected by a single spinlock, which causes heavy lock
>>> contention when multiple tasks zswap_store/load concurrently.
>>>
>>> A simple solution is to split the single zswap rb-tree into multiple
>>> rb-trees, each corresponding to SWAP_ADDRESS_SPACE_PAGES (64M). This
>>> idea comes from commit 4b3ef9daa4fc ("mm/swap: split swap cache into
>>> 64MB trunks").
>>>
>>> Although this method can't eliminate the spinlock contention
>>> completely, it mitigates much of it.
>>
>> By how much? Do you have any stats to estimate the amount of
>> contention and the reduction by this patch?
>
> Actually, I ran some tests on linux-next 20231205 yesterday.
>
> Testcase: memory.max = 2G, zswap enabled, make -j32 in tmpfs.
>
>                            20231205    +patchset
> 1. !shrinker_enabled:        156s         126s
> 2. shrinker_enabled:          79s          70s
>
> I think your zswap shrinker fix patch can address the !shrinker_enabled
> case.
>
> So I will test again today using the new mm-unstable branch.
>

Updated test data based on today's mm-unstable branch:

                           mm-unstable   +patchset
1. !shrinker_enabled:          86s           74s
2. shrinker_enabled:           63s           61s

This shows much less improvement in the shrinker_enabled case, but still
a significant improvement in the !shrinker_enabled case.

Thanks!
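
P.S. For anyone skimming the thread, here is a minimal sketch of the
splitting idea discussed above. It assumes 4K pages, so
SWAP_ADDRESS_SPACE_SHIFT (14) gives 2^14 pages == 64M per tree; the
names zswap_trees[] and zswap_tree_get() are hypothetical illustrations,
not necessarily the patchset's actual interface:

#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/swap.h>		/* SWAP_ADDRESS_SPACE_SHIFT, MAX_SWAPFILES */
#include <linux/swapops.h>	/* swp_type(), swp_offset() */

struct zswap_tree {
	struct rb_root rbroot;
	spinlock_t lock;	/* contention is now per 64M swap range */
};

/* One array of trees per swap device (hypothetical layout). */
static struct zswap_tree *zswap_trees[MAX_SWAPFILES];

static struct zswap_tree *zswap_tree_get(swp_entry_t swp)
{
	pgoff_t offset = swp_offset(swp);

	/* Pick the tree covering this entry's 64M chunk. */
	return &zswap_trees[swp_type(swp)][offset >> SWAP_ADDRESS_SPACE_SHIFT];
}

With something like this, stores and loads take only the per-chunk lock,
so two tasks touching different 64M ranges no longer serialize on a
single spinlock.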