Now that swap supports storing all mTHP sizes, avoid splitting large
folios before swap-out. This benefits performance of the swap-out path
by eliding split_folio_to_list(), which is expensive, and also sets us
up for swapping in large folios in a future series.

If the folio is partially mapped, we continue to split it since we want
to avoid the extra IO overhead and storage of writing out pages
unnecessarily.

THP_SWPOUT and THP_SWPOUT_FALLBACK counters should continue to count
events only for PMD-mappable folios to avoid user confusion. THP_SWPOUT
already has the appropriate guard. Add a guard for THP_SWPOUT_FALLBACK.
It may be appropriate to add per-size counters in future.

Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Barry Song <v-songbaohua@xxxxxxxx>
Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
---
 mm/vmscan.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 00adaf1cb2c3..ffc4553c8615 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 					if (!can_split_folio(folio, NULL))
 						goto activate_locked;
 					/*
-					 * Split folios without a PMD map right
-					 * away. Chances are some or all of the
-					 * tail pages can be freed without IO.
+					 * Split partially mapped folios right
+					 * away. We can free the unmapped pages
+					 * without IO.
 					 */
-					if (!folio_entire_mapcount(folio) &&
+					if (data_race(!list_empty(
+						&folio->_deferred_list)) &&
 					    split_folio_to_list(folio,
 								folio_list))
 						goto activate_locked;
@@ -1240,8 +1241,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 								folio_list))
 						goto activate_locked;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-					count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
-					count_vm_event(THP_SWPOUT_FALLBACK);
+					if (nr_pages >= HPAGE_PMD_NR) {
+						count_memcg_folio_events(folio,
+							THP_SWPOUT_FALLBACK, 1);
+						count_vm_event(
+							THP_SWPOUT_FALLBACK);
+					}
 #endif
 					if (!add_to_swap(folio))
 						goto activate_locked_split;
-- 
2.25.1
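
Not part of the patch: below is a minimal illustrative sketch of the split
decision described above, using the same folio helpers as the hunks; the
wrapper name want_split_before_swapout() is hypothetical and only collects
the heuristic in one place.

static bool want_split_before_swapout(struct folio *folio)
{
	/* Small folios and fully mapped large folios are swapped out whole. */
	if (!folio_test_large(folio))
		return false;

	/*
	 * A large folio on the deferred split list is partially mapped;
	 * splitting it first avoids writing its unmapped pages to swap.
	 * The unlocked list check is annotated with data_race(), as in
	 * the first hunk.
	 */
	return data_race(!list_empty(&folio->_deferred_list));
}

As in the patch, the split itself is still done by split_folio_to_list() and
is only attempted when can_split_folio() says the folio can be split.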