The patch titled
     Subject: mm, compaction: shrink compact_control
has been added to the -mm tree.  Its filename is
     mm-compaction-shrink-compact_control.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-compaction-shrink-compact_control.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-compaction-shrink-compact_control.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Subject: mm, compaction: shrink compact_control

Patch series "Increase success rates and reduce latency of compaction", v2.

This series reduces scan rates and increases success rates of compaction,
primarily by using the free lists to shorten scans, better controlling
skip information and whether multiple scanners can target the same block,
and capturing pageblocks before they are stolen by parallel requests.  The
series is based on the 4.21/5.0 merge window after Andrew's tree had been
merged.  It is known to rebase cleanly.

Primarily I'm using thpscale to measure the impact of the series.  The
benchmark creates a large file, maps it, faults it, punches holes in the
mapping so that the virtual address space is fragmented and then tries to
allocate THP.  It re-executes for different numbers of threads.  From a
fragmentation perspective, the workload is relatively benign but it does
stress compaction.  (A minimal sketch of this workload pattern follows
the results below.)

The overall impact on latencies for a 1-socket machine is

                                  baseline             patches
Amean     fault-both-3      5362.80 (   0.00%)     4446.89 *  17.08%*
Amean     fault-both-5      9488.75 (   0.00%)     5660.86 *  40.34%*
Amean     fault-both-7     11909.86 (   0.00%)     8549.63 *  28.21%*
Amean     fault-both-12    16185.09 (   0.00%)    11508.36 *  28.90%*
Amean     fault-both-18    12057.72 (   0.00%)    19013.48 * -57.69%*
Amean     fault-both-24    23939.95 (   0.00%)    19676.16 *  17.81%*
Amean     fault-both-30    26606.14 (   0.00%)    27363.23 (  -2.85%)
Amean     fault-both-32    31677.12 (   0.00%)    23154.09 *  26.91%*

While there is a glitch at the 18-thread mark, it's known that the base
page allocation latency was much lower and huge pages were taking longer --
partially due to the high allocation success rate.

The allocation success rates are much improved

                               baseline             patches
Percentage huge-3        70.93 (   0.00%)      98.30 (  38.60%)
Percentage huge-5        56.02 (   0.00%)      83.36 (  48.81%)
Percentage huge-7        60.98 (   0.00%)      89.04 (  46.01%)
Percentage huge-12       73.02 (   0.00%)      94.36 (  29.23%)
Percentage huge-18       94.37 (   0.00%)      95.87 (   1.58%)
Percentage huge-24       84.95 (   0.00%)      97.41 (  14.67%)
Percentage huge-30       83.63 (   0.00%)      96.69 (  15.62%)
Percentage huge-32       81.69 (   0.00%)      96.10 (  17.65%)

That is a nearly perfect allocation success rate.

The biggest impact is on the scan rates

Compaction migrate scanned     106520811      26934599
Compaction free scanned       4180735040      26584944

The number of pages scanned for migration was reduced by 74% and the free
scanner was reduced by 99.36%.  That is much less work in exchange for
lower latency and better success rates.

The series was also evaluated using a workload that heavily fragments
memory; the benefits there are also significant, albeit not presented
here.
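
For reference, a minimal sketch of that workload pattern in C.  This is
not the actual thpscale benchmark from mmtests; the sizes, the use of an
anonymous mapping instead of a file-backed one, and the specific
madvise() calls are assumptions made to keep the illustration
self-contained:

	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len  = 1UL << 30;	/* 1G mapping; size is arbitrary */
		size_t hole = 2UL << 20;	/* 2M holes, roughly one THP each */
		char *map;
		size_t off;

		/*
		 * The real benchmark maps a large file; anonymous memory
		 * keeps this sketch self-contained.
		 */
		map = mmap(NULL, len, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (map == MAP_FAILED)
			return 1;

		/* Fault the whole mapping so pages are actually allocated */
		memset(map, 1, len);

		/* Punch holes so the mapping and its backing memory fragment */
		for (off = 0; off < len; off += 2 * hole)
			madvise(map + off, hole, MADV_DONTNEED);

		/*
		 * Request THP and re-fault; satisfying these faults with
		 * huge pages is what drives compaction.
		 */
		madvise(map, len, MADV_HUGEPAGE);
		memset(map, 2, len);

		munmap(map, len);
		return 0;
	}

The interesting numbers are the fault latency of the second memset() and
how many of those faults end up backed by huge pages, which is broadly
what the latency and success-rate tables above report.
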
It was commented that we should be rethinking scanning entirely, and to a
large extent I agree.  However, to achieve that you need a lot of this
series in place first, so it's best to make the linear scanners as good as
possible before ripping them out.

This patch (of 25):

The isolate and migrate scanners should never isolate more than a
pageblock of pages, so unsigned int is sufficient, saving 8 bytes on a
64-bit build.

Link: http://lkml.kernel.org/r/20190104125011.16071-2-mgorman@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/internal.h~mm-compaction-shrink-compact_control
+++ a/mm/internal.h
@@ -185,8 +185,8 @@ struct compact_control {
 	struct list_head freepages;	/* List of free pages to migrate to */
 	struct list_head migratepages;	/* List of pages being migrated */
 	struct zone *zone;
-	unsigned long nr_freepages;	/* Number of isolated free pages */
-	unsigned long nr_migratepages;	/* Number of pages to migrate */
+	unsigned int nr_freepages;	/* Number of isolated free pages */
+	unsigned int nr_migratepages;	/* Number of pages to migrate */
 	unsigned long total_migrate_scanned;
 	unsigned long total_free_scanned;
 	unsigned long free_pfn;		/* isolate_freepages search base */
_

Patches currently in -mm which might be from mgorman@xxxxxxxxxxxxxxxxxxx are

mm-page_alloc-do-not-wake-kswapd-with-zone-lock-held.patch
mm-compaction-shrink-compact_control.patch
mm-compaction-rearrange-compact_control.patch
mm-compaction-remove-last_migrated_pfn-from-compact_control.patch
mm-compaction-remove-unnecessary-zone-parameter-in-some-instances.patch
mm-compaction-rename-map_pages-to-split_map_pages.patch
mm-compaction-skip-pageblocks-with-reserved-pages.patch
mm-migrate-immediately-fail-migration-of-a-page-with-no-migration-handler.patch
mm-compaction-always-finish-scanning-of-a-full-pageblock.patch
mm-compaction-use-the-page-allocator-bulk-free-helper-for-lists-of-pages.patch
mm-compaction-ignore-the-fragmentation-avoidance-boost-for-isolation-and-compaction.patch
mm-compaction-use-free-lists-to-quickly-locate-a-migration-source.patch
mm-compaction-keep-migration-source-private-to-a-single-compaction-instance.patch
mm-compaction-use-free-lists-to-quickly-locate-a-migration-target.patch
mm-compaction-avoid-rescanning-the-same-pageblock-multiple-times.patch
mm-compaction-finish-pageblock-scanning-on-contention.patch
mm-compaction-check-early-for-huge-pages-encountered-by-the-migration-scanner.patch
mm-compaction-keep-cached-migration-pfns-synced-for-unusable-pageblocks.patch
mm-compaction-rework-compact_should_abort-as-compact_check_resched.patch
mm-compaction-do-not-consider-a-need-to-reschedule-as-contention.patch
mm-compaction-reduce-unnecessary-skipping-of-migration-target-scanner.patch
mm-compaction-round-robin-the-order-while-searching-the-free-lists-for-a-target.patch
mm-compaction-sample-pageblocks-for-free-pages.patch
mm-compaction-be-selective-about-what-pageblocks-to-clear-skip-hints.patch
mm-compaction-capture-a-page-under-direct-compaction.patch
mm-compaction-do-not-direct-compact-remote-memory.patch
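
As a postscript to the patch above, a toy user-space sketch of where the
8 bytes come from.  This is not kernel code; the structs below are a
made-up slice of compact_control kept only to show the layout effect:

	#include <stdio.h>

	struct before {
		void *zone;
		unsigned long nr_freepages;	/* 8 bytes */
		unsigned long nr_migratepages;	/* 8 bytes */
		unsigned long total_migrate_scanned;
	};

	struct after {
		void *zone;
		unsigned int nr_freepages;	/* 4 bytes ...          */
		unsigned int nr_migratepages;	/* ... packed alongside  */
		unsigned long total_migrate_scanned;
	};

	int main(void)
	{
		/* On an LP64 build this prints "before=32 after=24" */
		printf("before=%zu after=%zu\n",
		       sizeof(struct before), sizeof(struct after));
		return 0;
	}

The two unsigned ints pack into the 8-byte slot previously used by a
single unsigned long, and because the neighbouring fields remain
naturally aligned no padding is reintroduced, hence the 8-byte saving.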