From: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Subject: mm, compaction: do not consider a need to reschedule as contention

Scanning on large machines can take a considerable length of time and
eventually need to be rescheduled.  This is treated as an abort event but
that's not appropriate as the attempt is likely to be retried after making
numerous checks and taking another cycle through the page allocator.  This
patch will check the need to reschedule if necessary but continue the
scanning.

The main benefit is reduced scanning when compaction is taking a long time
or the machine is over-saturated.  It also avoids an unnecessary exit of
compaction that ends up being retried by the page allocator in the outer
loop.

                                     5.0.0-rc1              5.0.0-rc1
                              synccached-v3r16        noresched-v3r17
Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
Amean     fault-both-3      2958.27 (   0.00%)     2965.68 (  -0.25%)
Amean     fault-both-5      4091.90 (   0.00%)     3995.90 (   2.35%)
Amean     fault-both-7      5803.05 (   0.00%)     5842.12 (  -0.67%)
Amean     fault-both-12     9481.06 (   0.00%)     9550.87 (  -0.74%)
Amean     fault-both-18    14141.51 (   0.00%)    13304.72 (   5.92%)
Amean     fault-both-24    16438.00 (   0.00%)    14618.59 (  11.07%)
Amean     fault-both-30    17531.72 (   0.00%)    16650.96 (   5.02%)
Amean     fault-both-32    17101.96 (   0.00%)    17145.15 (  -0.25%)

Link: http://lkml.kernel.org/r/20190118175136.31341-18-mgorman@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: YueHaibing <yuehaibing@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/compaction.c |   23 ++++-------------------
 1 file changed, 4 insertions(+), 19 deletions(-)

--- a/mm/compaction.c~mm-compaction-do-not-consider-a-need-to-reschedule-as-contention
+++ a/mm/compaction.c
@@ -405,21 +405,6 @@ static bool compact_lock_irqsave(spinloc
 }
 
 /*
- * Aside from avoiding lock contention, compaction also periodically checks
- * need_resched() and records async compaction as contended if necessary.
- */
-static inline void compact_check_resched(struct compact_control *cc)
-{
-	/* async compaction aborts if contended */
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-
-		cond_resched();
-	}
-}
-
-/*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
  * having disabled IRQs for a long time, even when there is nobody waiting on
@@ -447,7 +432,7 @@ static bool compact_unlock_should_abort(
 		return true;
 	}
 
-	compact_check_resched(cc);
+	cond_resched();
 
 	return false;
 }
@@ -736,7 +721,7 @@ isolate_migratepages_block(struct compac
 			return 0;
 	}
 
-	compact_check_resched(cc);
+	cond_resched();
 
 	if (cc->direct_compaction && (cc->mode == MIGRATE_ASYNC)) {
 		skip_on_failure = true;
@@ -1370,7 +1355,7 @@ static void isolate_freepages(struct com
 		 * suitable migration targets, so periodically check resched.
 		 */
 		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
-			compact_check_resched(cc);
+			cond_resched();
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
@@ -1666,7 +1651,7 @@ static isolate_migrate_t isolate_migrate
 		 * need to schedule.
 		 */
 		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
-			compact_check_resched(cc);
+			cond_resched();
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
_
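
For readers outside the kernel tree, the following is a minimal userspace
sketch of the pattern the patch converges on: yield the CPU periodically
at chunk boundaries and keep scanning, rather than recording a pending
reschedule as "contention" and aborting.  sched_yield() is only a rough
stand-in for the kernel's cond_resched(), and the region, chunk size and
per-element "work" are made up for illustration; none of it is taken from
mm/compaction.c.

/*
 * Illustrative userspace sketch only -- NOT kernel code.
 */
#include <sched.h>
#include <stddef.h>
#include <stdio.h>

#define SCAN_SIZE	(1 << 20)	/* hypothetical region to scan */
#define CHUNK		4096		/* hypothetical resched interval */

static unsigned char region[SCAN_SIZE];

int main(void)
{
	size_t i;
	unsigned long found = 0;

	for (i = 0; i < SCAN_SIZE; i++) {
		/* pretend to look for something isolatable */
		if (region[i] == 0)
			found++;

		/*
		 * Old behaviour: note the pending reschedule, flag the
		 * scan as contended and bail out, forcing the caller to
		 * retry from scratch.  New behaviour: give the scheduler
		 * a chance and carry on from where we are.
		 */
		if (!(i % CHUNK))
			sched_yield();
	}

	printf("scanned %d bytes, %lu candidates\n", SCAN_SIZE, found);
	return 0;
}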