On 1/18/19 6:51 PM, Mel Gorman wrote:
> Compaction is inherently race-prone as a suitable page freed during
> compaction can be allocated by any parallel task. This patch uses a
> capture_control structure to isolate a page immediately when it is freed
> by a direct compactor in the slow path of the page allocator. The intent
> is to avoid redundant scanning.
> 
>                                      5.0.0-rc1              5.0.0-rc1
>                                selective-v3r17          capture-v3r19
> Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
> Amean     fault-both-3      2582.11 (   0.00%)     2563.68 (   0.71%)
> Amean     fault-both-5      4500.26 (   0.00%)     4233.52 (   5.93%)
> Amean     fault-both-7      5819.53 (   0.00%)     6333.65 (  -8.83%)
> Amean     fault-both-12     9321.18 (   0.00%)     9759.38 (  -4.70%)
> Amean     fault-both-18     9782.76 (   0.00%)    10338.76 (  -5.68%)
> Amean     fault-both-24    15272.81 (   0.00%)    13379.55 *  12.40%*
> Amean     fault-both-30    15121.34 (   0.00%)    16158.25 (  -6.86%)
> Amean     fault-both-32    18466.67 (   0.00%)    18971.21 (  -2.73%)
> 
> Latency is only moderately affected but the devil is in the details.
> A closer examination indicates that base page fault latency is reduced
> but latency of huge pages is increased as it takes greater care to
> succeed. Part of the "problem" is that allocation success rates are close
> to 100% even when under pressure and compaction gets harder.
> 
>                                   5.0.0-rc1              5.0.0-rc1
>                             selective-v3r17          capture-v3r19
> Percentage huge-3        96.70 (   0.00%)       98.23 (   1.58%)
> Percentage huge-5        96.99 (   0.00%)       95.30 (  -1.75%)
> Percentage huge-7        94.19 (   0.00%)       97.24 (   3.24%)
> Percentage huge-12       94.95 (   0.00%)       97.35 (   2.53%)
> Percentage huge-18       96.74 (   0.00%)       97.30 (   0.58%)
> Percentage huge-24       97.07 (   0.00%)       97.55 (   0.50%)
> Percentage huge-30       95.69 (   0.00%)       98.50 (   2.95%)
> Percentage huge-32       96.70 (   0.00%)       99.27 (   2.65%)
> 
> And scan rates are reduced as expected by 6% for the migration scanner
> and 29% for the free scanner, indicating that there is less redundant work.
> 
> Compaction migrate scanned    20815362    19573286
> Compaction free scanned       16352612    11510663
> 
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

Nit below:

...

> @@ -819,6 +870,7 @@ static inline void __free_one_page(struct page *page,
>  	unsigned long uninitialized_var(buddy_pfn);
>  	struct page *buddy;
>  	unsigned int max_order;
> +	struct capture_control *capc = task_capc(zone);
>  
>  	max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
>  
> @@ -834,6 +886,12 @@ static inline void __free_one_page(struct page *page,
>  
>  continue_merging:
>  	while (order < max_order - 1) {
> +		if (compaction_capture(capc, page, order, migratetype)) {
> +			if (likely(!is_migrate_isolate(migratetype)))

compaction_capture() won't act on isolated migratetype, so this check is
unnecessary?

> +				__mod_zone_freepage_state(zone, -(1 << order),
> +							migratetype);
> +			return;
> +		}
>  		buddy_pfn = __find_buddy_pfn(pfn, order);
>  		buddy = page + (buddy_pfn - pfn);
> 
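
For reference, this is roughly how I read the capture helper in this
series (a simplified sketch of compaction_capture(), not the exact v3
code; the capc->cc->order field layout is as I understand it from the
patch):

static inline bool
compaction_capture(struct capture_control *capc, struct page *page,
                   int order, int migratetype)
{
        /* No capture in progress, or the freed page has the wrong order. */
        if (!capc || order != capc->cc->order)
                return false;

        /* Never hand out pages from isolated or CMA pageblocks. */
        if (is_migrate_cma(migratetype) || is_migrate_isolate(migratetype))
                return false;

        /* Divert the page to the capturing task instead of the freelist. */
        capc->page = page;
        return true;
}

Since the isolated case already returns false in the helper, the
!is_migrate_isolate() condition at the call site is always true once
capture succeeds, and __mod_zone_freepage_state() could be called
unconditionally there.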