From: Kairui Song <kasong@xxxxxxxxxxx>

Currently we use one swap_address_space for every 64M chunk to reduce
lock contention. This is like having a set of smaller swap files inside
one big swap file. But when doing a swap cache lookup or insert, we are
still using the offset within the whole large swap file. This is OK for
correctness, as the offset (key) is unique.

But the Xarray is specially optimized for small indexes: it creates the
radix tree levels lazily, just deep enough to fit the largest key
stored in one Xarray. So we are wasting tree nodes unnecessarily. For a
64M chunk it should take at most 3 levels to contain everything. But
since we are using the offset within the whole swap file, the offset
(key) value will go way beyond 64M, and so will the tree depth (see the
worked numbers below).

Optimize this by reducing the swap cache search space to a 64M scope.

Tested with `time memhog 128G` inside an 8G memcg using 128G of swap
(ramdisk with SWP_SYNCHRONOUS_IO dropped; tested 3 times, results are
stable. The result is similar but the improvement is smaller if
SWP_SYNCHRONOUS_IO is enabled, as the swap-out path can never skip the
swap cache):

Before:
6.07user 250.74system 4:17.26elapsed 99%CPU (0avgtext+0avgdata 8373376maxresident)k
0inputs+0outputs (55major+33555018minor)pagefaults 0swaps

After (+1.8% faster):
6.08user 246.09system 4:12.58elapsed 99%CPU (0avgtext+0avgdata 8373248maxresident)k
0inputs+0outputs (54major+33555027minor)pagefaults 0swaps

Similar result with MySQL and sysbench using swap:

Before: 94055.61 qps
After (+0.8% faster): 94834.91 qps

There is also a very slight drop in radix tree node slab usage:

Before: 303952K
After: 302224K

For this series:

There are multiple places that expect mixed types of pages (page cache
or swap cache), e.g. migration and huge page splitting. There are four
helpers for that:

- page_index
- page_file_offset
- folio_index
- folio_file_pos

So this series first cleans up the usage of page_index and
page_file_offset, then converts folio_index and folio_file_pos to be
compatible with separate offsets. It then introduces a new helper,
swap_cache_index, for swap-internal usage, replacing swp_offset with
swap_cache_index wherever it is used to retrieve a folio from the swap
cache.

Ideally, we may want to reduce SWAP_ADDRESS_SPACE_SHIFT from 14 to 12:
the default Xarray chunk shift is 6, so we have 3-level trees instead
of 2-level trees just for 2 extra bits. But the swap cache is based on
the address_space struct, and with 4 times more metadata sparsely
distributed in memory it wastes more cachelines, so the performance
gain from this series would be almost canceled out. So first, just get
a cleaner separation of offsets:

Patch 1/8 - 6/8: Clean up usage of page_index and page_file_offset
Patch 7/8: Convert folio_index and folio_file_pos to be compatible
  with separate offsets.
Patch 8/8: Introduce swap_cache_index and use it when doing lookups in
  the swap cache (a sketch of the idea follows below).
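To make the tree depth concrete, here is a rough calculation (assuming
the default XA_CHUNK_SHIFT of 6, i.e. 64 slots per node, and 4K pages):
a 64M chunk holds 64M / 4K = 2^14 pages, and since each tree level
covers 6 bits of the key, ceil(14 / 6) = 3 levels are enough for
chunk-local keys. With a 128G swap file, raw offsets reach
128G / 4K = 2^25, so keying by the global offset can force
ceil(25 / 6) = 5 levels, even though each address_space only ever
holds at most 2^14 entries.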
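For reference, below is a minimal sketch of the patch 8/8 helper. It
assumes the existing SWAP_ADDRESS_SPACE_SHIFT / SWAP_ADDRESS_SPACE_PAGES
definitions in mm/swap.h and the existing swp_offset() /
swap_address_space() helpers; SWAP_ADDRESS_SPACE_MASK is introduced
here for illustration, and the actual patch may differ in detail:

    /* mm/swap.h */
    #define SWAP_ADDRESS_SPACE_SHIFT	14
    #define SWAP_ADDRESS_SPACE_PAGES	(1 << SWAP_ADDRESS_SPACE_SHIFT)
    #define SWAP_ADDRESS_SPACE_MASK	(SWAP_ADDRESS_SPACE_PAGES - 1)

    /*
     * Each 64M chunk has its own address_space, so only the low
     * SWAP_ADDRESS_SPACE_SHIFT bits of the swap offset are needed
     * to keep the key unique within one Xarray. Masking keeps every
     * key below 2^14, bounding the tree at 3 levels.
     */
    static inline pgoff_t swap_cache_index(swp_entry_t entry)
    {
            return swp_offset(entry) & SWAP_ADDRESS_SPACE_MASK;
    }

Swap cache lookups would then pass the chunk-local index instead of
the raw offset, e.g.:

    folio = filemap_get_folio(swap_address_space(entry),
                              swap_cache_index(entry));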
This series is part of an effort to reduce swap cache overhead, and
ultimately remove SWP_SYNCHRONOUS_IO and unify swap cache usage as
proposed before:
https://lore.kernel.org/lkml/20240326185032.72159-1-ryncsn@xxxxxxxxx/

Kairui Song (8):
  NFS: remove nfs_page_lengthg and usage of page_index
  nilfs2: drop usage of page_index
  f2fs: drop usage of page_index
  ceph: drop usage of page_index
  cifs: drop usage of page_file_offset
  mm/swap: get the swap file offset directly
  mm: drop page_index/page_file_offset and convert swap helpers to use
    folio
  mm/swap: reduce swap cache search space

 fs/ceph/dir.c           |  2 +-
 fs/ceph/inode.c         |  2 +-
 fs/f2fs/data.c          |  5 ++---
 fs/nfs/internal.h       | 19 -------------------
 fs/nilfs2/bmap.c        |  3 +--
 fs/smb/client/file.c    |  2 +-
 include/linux/mm.h      | 13 -------------
 include/linux/pagemap.h | 19 +++++++++----------
 mm/huge_memory.c        |  2 +-
 mm/memcontrol.c         |  2 +-
 mm/mincore.c            |  2 +-
 mm/page_io.c            |  6 +++---
 mm/shmem.c              |  2 +-
 mm/swap.h               | 12 ++++++++++++
 mm/swap_state.c         | 12 ++++++------
 mm/swapfile.c           | 17 +++++++++++------
 16 files changed, 51 insertions(+), 69 deletions(-)

--
2.44.0