From: Barry Song <v-songbaohua@xxxxxxxx>

When the proportion of folios from the zero map is small, missing their
accounting may not significantly impact profiling. However, it's easy to
construct a scenario where this becomes an issue: for example, allocate
1 GB of memory, write zeros from userspace, trigger MADV_PAGEOUT, and
then swap the memory back in. In this case, the swap-out and swap-in
counts seem to vanish into a black hole, potentially causing semantic
ambiguity.

We have two ways to address this:

1. Add a separate counter specifically for the zero map.
2. Continue using the current accounting, treating the zero map like a
   normal backend. (This aligns with the current behavior of zRAM when
   supporting same-page fills at the device level.)

This patch adopts option 2. I'm curious whether others have different
opinions, so I'm marking it as RFC. A rough userspace reproducer is
included after the patch for reference.

Fixes: 0ca0c24e3211 ("mm: store zero pages to be swapped out in a bitmap")
Cc: Usama Arif <usamaarif642@xxxxxxxxx>
Cc: Chengming Zhou <chengming.zhou@xxxxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Cc: Nhat Pham <nphamcs@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Cc: Andi Kleen <ak@xxxxxxxxxxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Chris Li <chrisl@xxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Kairui Song <kasong@xxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
---
 mm/page_io.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index 5d9b6e6cf96c..90c5ea870038 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -226,6 +226,19 @@ static void swap_zeromap_folio_clear(struct folio *folio)
 	}
 }
 
+static inline void count_swpout_vm_event(struct folio *folio)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (unlikely(folio_test_pmd_mappable(folio))) {
+		count_memcg_folio_events(folio, THP_SWPOUT, 1);
+		count_vm_event(THP_SWPOUT);
+	}
+#endif
+	count_mthp_stat(folio_order(folio), MTHP_STAT_SWPOUT);
+	count_memcg_folio_events(folio, PSWPOUT, folio_nr_pages(folio));
+	count_vm_events(PSWPOUT, folio_nr_pages(folio));
+}
+
 /*
  * We may have stale swap cache pages in memory: notice
  * them here and get rid of the unnecessary final write.
@@ -258,6 +271,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 	 */
 	if (is_folio_zero_filled(folio)) {
 		swap_zeromap_folio_set(folio);
+		count_swpout_vm_event(folio);
 		folio_unlock(folio);
 		return 0;
 	} else {
@@ -282,19 +296,6 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 	return 0;
 }
 
-static inline void count_swpout_vm_event(struct folio *folio)
-{
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (unlikely(folio_test_pmd_mappable(folio))) {
-		count_memcg_folio_events(folio, THP_SWPOUT, 1);
-		count_vm_event(THP_SWPOUT);
-	}
-#endif
-	count_mthp_stat(folio_order(folio), MTHP_STAT_SWPOUT);
-	count_memcg_folio_events(folio, PSWPOUT, folio_nr_pages(folio));
-	count_vm_events(PSWPOUT, folio_nr_pages(folio));
-}
-
 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
 static void bio_associate_blkg_from_page(struct bio *bio, struct folio *folio)
 {
@@ -621,6 +622,9 @@ void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
 	delayacct_swapin_start();
 
 	if (swap_read_folio_zeromap(folio)) {
+		count_mthp_stat(folio_order(folio), MTHP_STAT_SWPIN);
+		count_memcg_folio_events(folio, PSWPIN, folio_nr_pages(folio));
+		count_vm_events(PSWPIN, folio_nr_pages(folio));
 		folio_unlock(folio);
 		goto finish;
 	} else if (zswap_load(folio)) {
-- 
2.39.3 (Apple Git-146)
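
For reference, here is a rough userspace sketch of the reproduction
scenario described in the commit message (not part of the patch): it
maps 1 GB of anonymous memory, faults it in with zeros, pushes it out
with MADV_PAGEOUT, then touches it again so it is swapped back in.
Comparing pswpout/pswpin in /proc/vmstat before and after shows the
counters standing still without this patch. The 4 KiB stride assumes a
4 KiB base page size.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define SIZE	(1UL << 30)	/* 1 GB */
#define STRIDE	4096UL		/* assumed base page size */

int main(void)
{
	volatile char sink = 0;
	size_t i;
	char *buf;

	buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/*
	 * Fault the pages in from userspace while keeping them
	 * zero-filled, so they are eligible for the swap zeromap
	 * at reclaim time.
	 */
	memset(buf, 0, SIZE);

	/* Ask the kernel to reclaim (swap out) the whole range. */
	if (madvise(buf, SIZE, MADV_PAGEOUT)) {
		perror("madvise(MADV_PAGEOUT)");
		return EXIT_FAILURE;
	}

	/* Touch one byte per page to swap everything back in. */
	for (i = 0; i < SIZE; i += STRIDE)
		sink += buf[i];

	munmap(buf, SIZE);
	return 0;
}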