On Thu, Oct 21, 2021 at 03:33:16AM +0100, Matthew Wilcox wrote:
> On Wed, Oct 20, 2021 at 06:08:42PM -0700, akpm@xxxxxxxxxxxxxxxxxxxx wrote:
> > From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > Subject: restore-acct_reclaim_writeback-for-folio
> >
> > Make Mel's "mm/vmscan: throttle reclaim and compaction when too may pages
> > are isolated" work for folio changes.
>
> Mmm.  acct_reclaim_writeback() is going to need to be converted to
> folios -- it accounts a page as a single page instead of as however
> many pages it contains.
>
> This patch makes sense to apply, so this is just a note that there's
> a fuller fixup to come later.

Later seems to be now.  This patch compiles.  The only non-mechanical
change in here is changing inc_node_page_state() to node_stat_add_folio(),
which accounts for the number of pages in the folio instead of 1.

diff --git a/mm/filemap.c b/mm/filemap.c
index 6844c9816a86..daa0e23a6ee6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1607,7 +1607,7 @@ void folio_end_writeback(struct folio *folio)
 	smp_mb__after_atomic();
 	folio_wake(folio, PG_writeback);
-	acct_reclaim_writeback(folio_page(folio, 0));
+	acct_reclaim_writeback(folio);
 	folio_put(folio);
 }
 EXPORT_SYMBOL(folio_end_writeback);
diff --git a/mm/internal.h b/mm/internal.h
index 632c55c5a075..3b79a5c9427a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -41,15 +41,15 @@ static inline void *folio_raw_mapping(struct folio *folio)
 	return (void *)(mapping & ~PAGE_MAPPING_FLAGS);
 }
 
-void __acct_reclaim_writeback(pg_data_t *pgdat, struct page *page,
+void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
 						int nr_throttled);
-static inline void acct_reclaim_writeback(struct page *page)
+static inline void acct_reclaim_writeback(struct folio *folio)
 {
-	pg_data_t *pgdat = page_pgdat(page);
+	pg_data_t *pgdat = folio_pgdat(folio);
 	int nr_throttled = atomic_read(&pgdat->nr_writeback_throttled);
 
 	if (nr_throttled)
-		__acct_reclaim_writeback(pgdat, page, nr_throttled);
+		__acct_reclaim_writeback(pgdat, folio, nr_throttled);
 }
 
 static inline void wake_throttle_isolated(pg_data_t *pgdat)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 59c07ee4220d..fb9584641ac7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1085,12 +1085,12 @@ void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason)
  * pages to clean. If enough pages have been cleaned since throttling
  * started then wakeup the throttled tasks.
  */
-void __acct_reclaim_writeback(pg_data_t *pgdat, struct page *page,
+void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
 						int nr_throttled)
 {
 	unsigned long nr_written;
 
-	inc_node_page_state(page, NR_THROTTLED_WRITTEN);
+	node_stat_add_folio(folio, NR_THROTTLED_WRITTEN);
 
 	/*
 	 * This is an inaccurate read as the per-cpu deltas may not