Shmem will support large folio allocation [1] [2] to get better performance. However, memory reclaim still splits the large folios when trying to swap out shmem, which can lead to memory fragmentation and prevents shmem from taking advantage of large folios (a simplified, illustrative sketch of this split path is appended after the diffstat). Moreover, the swap code already supports swapping out large folios without splitting them, and large folio swap-in [3] is under review. Hence this patch set adds large folio swap-out and swap-in support for shmem.

Note: this patch set is currently just to show some thoughts and gather suggestions, and it is based on Barry's large folio swap-in patch set [3] and my anon shmem mTHP patch set [1].

[1] https://lore.kernel.org/all/cover.1715571279.git.baolin.wang@xxxxxxxxxxxxxxxxx/
[2] https://lore.kernel.org/all/20240515055719.32577-1-da.gomez@xxxxxxxxxxx/
[3] https://lore.kernel.org/all/20240508224040.190469-6-21cnbao@xxxxxxxxx/T/

Baolin Wang (8):
  mm: fix shmem swapout statistic
  mm: vmscan: add validation before splitting shmem large folio
  mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM
    flag setting
  mm: shmem: support large folio allocation for shmem_replace_folio()
  mm: shmem: extend shmem_partial_swap_usage() to support large folio
    swap
  mm: add new 'orders' parameter for find_get_entries() and
    find_lock_entries()
  mm: shmem: use swap_free_nr() to free shmem swap entries
  mm: shmem: support large folio swap out

 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  1 +
 include/linux/swap.h                      |  4 +-
 include/linux/writeback.h                 |  1 +
 mm/filemap.c                              | 27 ++++++-
 mm/internal.h                             |  4 +-
 mm/page_io.c                              |  4 +-
 mm/shmem.c                                | 59 ++++++++------
 mm/swapfile.c                             | 98 ++++++++++++-----------
 mm/truncate.c                             |  8 +-
 mm/vmscan.c                               | 22 ++++-
 10 files changed, 143 insertions(+), 85 deletions(-)

--
2.39.3
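
For reference, below is a simplified, illustrative sketch of the split that
reclaim currently forces on shmem large folios before swap-out. It is not
copied from mm/shmem.c (the function name and the error handling are
simplified assumptions here), but it captures the behaviour this series aims
to remove: a large shmem folio is split back to order-0 pages before any swap
entry is allocated.

#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/huge_mm.h>
#include <linux/writeback.h>

/*
 * Illustrative sketch only: roughly what the pre-series swap-out path does
 * with a large shmem folio. The real logic lives in shmem_writepage().
 */
static int shmem_swapout_sketch(struct page *page,
				struct writeback_control *wbc)
{
	struct folio *folio = page_folio(page);

	if (folio_test_large(folio)) {
		/*
		 * Before this series: split the large folio back to base
		 * pages; if the split fails, redirty and keep it in memory.
		 */
		if (split_huge_page(page) < 0) {
			folio_redirty_for_writepage(wbc, folio);
			return 0;
		}
		folio = page_folio(page);
	}

	/* Swap entry allocation and writeback then see only order-0 pages. */
	return swap_writepage(&folio->page, wbc);
}

In this series, patch 2 ("mm: vmscan: add validation before splitting shmem
large folio") and patch 8 ("mm: shmem: support large folio swap out") are the
ones that change this path so the whole folio can be swapped out as a unit.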