Re: [PATCH 1/2] mm: compaction: consider the number of scanning compound pages in isolate fail path


On 3/15/2023 11:54 PM, Vlastimil Babka wrote:
On 3/13/23 11:37, Baolin Wang wrote:
The commit b717d6b93b54 ("mm: compaction: include compound page count
for scanning in pageblock isolation") added compound page statistics
for scanning in pageblock isolation, to make sure the number of scanned
pages is always larger than the number of isolated pages when isolating
a migratable or free pageblock.

However, when isolation fails while scanning a migratable or free
pageblock, the isolation failure path does not account for the scanned
compound pages, which can report an incorrect number of scanned pages
in tracepoints or vmstats and confuse people about the page scanning
pressure in memory compaction.

Thus we should take the number of scanned pages into account when
isolation of compound pages fails, to make the statistics accurate.

Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
  mm/compaction.c | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 5a9501e0ae01..c9d9ad958e2a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -587,6 +587,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
  				blockpfn += (1UL << order) - 1;
  				cursor += (1UL << order) - 1;
  			}
+			nr_scanned += (1UL << order) - 1;

I'd rather put it in the block above that tests order < MAX_ORDER.
Otherwise, as the comments say, the value can be bogus as it's racy.

Right, thanks for pointing it out. Will do in next version.
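
Something like the below, I think (the surrounding PageCompound()
handling in isolate_freepages_block() is reproduced here from memory,
so please treat it as a rough sketch of the placement rather than the
actual v2 diff):

		if (PageCompound(page)) {
			const unsigned int order = compound_order(page);

			/* compound_order() is read racily, so only trust sane values */
			if (likely(order < MAX_ORDER)) {
				blockpfn += (1UL << order) - 1;
				cursor += (1UL << order) - 1;
				/* count the skipped tail pages as scanned too */
				nr_scanned += (1UL << order) - 1;
			}
			goto isolate_fail;
		}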


  			goto isolate_fail;
  		}
@@ -873,9 +874,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
  			cond_resched();
  		}
-		nr_scanned++;
-
  		page = pfn_to_page(low_pfn);
+		nr_scanned += compound_nr(page);

For the same reason, I'd rather leave the nr_scanned adjustment by order in
the specific code blocks where we know/think we have a compound or huge page
and have sanity checked the order/nr_pages, and not add an unchecked
compound_nr() here.

OK. Sounds reasonable to me. Thanks for your input.
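
So for v2 I will keep the plain per-pfn increment here and only bump
nr_scanned by the compound size in the blocks that have already
validated the page, roughly like this (just a sketch against the
current code, with the unrelated checks in between elided):

		nr_scanned++;

		page = pfn_to_page(low_pfn);

		/* ... existing checks elided ... */

			/* adjust only where the compound page has been validated */
			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
				low_pfn += compound_nr(page) - 1;
				nr_scanned += compound_nr(page) - 1;
				SetPageLRU(page);
				goto isolate_fail_put;
			}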

  		/*
  		 * Check if the pageblock has already been marked skipped.
@@ -1077,6 +1077,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
  			 */
  			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
  				low_pfn += compound_nr(page) - 1;
+				nr_scanned += compound_nr(page) - 1;
  				SetPageLRU(page);
  				goto isolate_fail_put;
  			}
@@ -1097,7 +1098,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
  isolate_success_no_list:
  		cc->nr_migratepages += compound_nr(page);
  		nr_isolated += compound_nr(page);
-		nr_scanned += compound_nr(page) - 1;
  		/*
  		 * Avoid isolating too much unless this block is being
