On 02/04/2024 14:10, Ryan Roberts wrote:
> On 28/03/2024 08:18, Barry Song wrote:
>> On Thu, Mar 28, 2024 at 3:45 AM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>>>
>>> Now that swap supports storing all mTHP sizes, avoid splitting large
>>> folios before swap-out. This benefits performance of the swap-out path
>>> by eliding split_folio_to_list(), which is expensive, and also sets us
>>> up for swapping in large folios in a future series.
>>>
>>> If the folio is partially mapped, we continue to split it since we want
>>> to avoid the extra IO overhead and storage of writing out pages
>>> unnecessarily.
>>>
>>> Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
>>> Reviewed-by: Barry Song <v-songbaohua@xxxxxxxx>
>>> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
>>> ---
>>>  mm/vmscan.c | 9 +++++----
>>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index 00adaf1cb2c3..293120fe54f3 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>>  					if (!can_split_folio(folio, NULL))
>>>  						goto activate_locked;
>>>  					/*
>>> -					 * Split folios without a PMD map right
>>> -					 * away. Chances are some or all of the
>>> -					 * tail pages can be freed without IO.
>>> +					 * Split partially mapped folios right
>>> +					 * away. We can free the unmapped pages
>>> +					 * without IO.
>>>  					 */
>>> -					if (!folio_entire_mapcount(folio) &&
>>> +					if (data_race(!list_empty(
>>> +						&folio->_deferred_list)) &&
>>>  					    split_folio_to_list(folio,
>>>  								folio_list))
>>>  						goto activate_locked;
>>
>> Hi Ryan,
>>
>> Sorry for bringing up another minor issue at this late stage.
>
> No problem - I'd rather take a bit longer and get it right than rush it and
> get it wrong!
>
>> While debugging the thp counter patch v2, I noticed a discrepancy between
>> THP_SWPOUT_FALLBACK and THP_SWPOUT.
>>
>> Should we make adjustments to the counter?
>
> Yes, agreed; we want to be consistent here with all the other existing THP
> counters; they only refer to PMD-sized THP. I'll make the change for the next
> version.
>
> I guess we will eventually want equivalent counters for per-size mTHP using
> the framework you are adding.
>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 293120fe54f3..d7856603f689 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1241,8 +1241,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>  							folio_list))
>>  					goto activate_locked;
>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> -				count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
>> -				count_vm_event(THP_SWPOUT_FALLBACK);
>> +				if (folio_test_pmd_mappable(folio)) {

This doesn't quite work because we have already split the folio here, so this
will always return false. I've changed it to:

	if (nr_pages >= HPAGE_PMD_NR) {

>> +					count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
>> +					count_vm_event(THP_SWPOUT_FALLBACK);
>> +				}
>>  #endif
>>  				if (!add_to_swap(folio))
>>  					goto activate_locked_split;
>>
>> Because THP_SWPOUT is only counted for PMD-sized folios:
>>
>> static inline void count_swpout_vm_event(struct folio *folio)
>> {
>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> 	if (unlikely(folio_test_pmd_mappable(folio))) {
>> 		count_memcg_folio_events(folio, THP_SWPOUT, 1);
>> 		count_vm_event(THP_SWPOUT);
>> 	}
>> #endif
>> 	count_vm_events(PSWPOUT, folio_nr_pages(folio));
>> }
>>
>> I can provide per-order counters for this in my THP counter patch.
>>
>>> --
>>> 2.25.1
>>
>> Thanks
>> Barry
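
For completeness, the fallback-accounting hunk ends up looking roughly like
this with that change applied. This is a sketch combining Barry's diff with
the fix above, not the final patch; it assumes nr_pages still holds the value
captured from folio_nr_pages(folio) earlier in shrink_folio_list(), before
the split, so it still reflects the pre-split size. folio itself points at
the first (now order-0) folio after the split, which is why
folio_test_pmd_mappable() can no longer be used here:

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/*
	 * Only count a fallback for folios that were PMD-sized on entry.
	 * This keeps THP_SWPOUT_FALLBACK consistent with THP_SWPOUT, which
	 * count_swpout_vm_event() only bumps for PMD-mappable folios.
	 */
	if (nr_pages >= HPAGE_PMD_NR) {
		count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
		count_vm_event(THP_SWPOUT_FALLBACK);
	}
#endif
	if (!add_to_swap(folio))
		goto activate_locked_split;

Smaller (mTHP) fallbacks go uncounted here for now; per-order counters can
pick those up in the THP counter series.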