The patch titled
     Subject: mm, page_owner: don't grab zone->lock for init_pages_in_zone()
has been added to the -mm tree.  Its filename is
     mm-page_owner-dont-grab-zone-lock-for-init_pages_in_zone.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-page_owner-dont-grab-zone-lock-for-init_pages_in_zone.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-page_owner-dont-grab-zone-lock-for-init_pages_in_zone.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vlastimil Babka <vbabka@xxxxxxx>
Subject: mm, page_owner: don't grab zone->lock for init_pages_in_zone()

init_pages_in_zone() is run under zone->lock, which means a long lock
time and disabled interrupts on large machines.  This is currently not
an issue since it runs early in boot, but a later patch will change
that.

However, like other pfn scanners, we don't actually need zone->lock even
when other cpus are running.  The only potentially dangerous operation
here is reading a bogus buddy page order due to a race, and we already
know how to handle that.  The worst that can happen is that we skip some
early allocated pages, which should not noticeably affect the debugging
power of page_owner.
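(Background, not part of the patch itself: page_order_unsafe() comes
from mm/internal.h, where it is defined roughly as below.  READ_ONCE()
forces a single read of page_private(page), but the value can still
race with allocation/freeing, which is why the hunk below only trusts
orders in the valid range.)

	/*
	 * Like page_order(), but for callers that cannot hold zone->lock
	 * and so cannot be sure the page is still a buddy page.  The read
	 * may race with allocation or freeing, so the result must be
	 * sanity-checked before use.
	 */
	#define page_order_unsafe(page)	READ_ONCE(page_private(page))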
Link: http://lkml.kernel.org/r/20170720134029.25268-4-vbabka@xxxxxxx
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Yang Shi <yang.shi@xxxxxxxxxx>
Cc: Laura Abbott <labbott@xxxxxxxxxx>
Cc: Vinayak Menon <vinmenon@xxxxxxxxxxxxxx>
Cc: zhong jiang <zhongjiang@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_owner.c |   16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff -puN mm/page_owner.c~mm-page_owner-dont-grab-zone-lock-for-init_pages_in_zone mm/page_owner.c
--- a/mm/page_owner.c~mm-page_owner-dont-grab-zone-lock-for-init_pages_in_zone
+++ a/mm/page_owner.c
@@ -567,11 +567,17 @@ static void init_pages_in_zone(pg_data_t
 			continue;
 
 		/*
-		 * We are safe to check buddy flag and order, because
-		 * this is init stage and only single thread runs.
+		 * To avoid having to grab zone->lock, be a little
+		 * careful when reading buddy page order. The only
+		 * danger is that we skip too much and potentially miss
+		 * some early allocated pages, which is better than
+		 * heavy lock contention.
 		 */
 		if (PageBuddy(page)) {
-			pfn += (1UL << page_order(page)) - 1;
+			unsigned long order = page_order_unsafe(page);
+
+			if (order > 0 && order < MAX_ORDER)
+				pfn += (1UL << order) - 1;
 			continue;
 		}
 
@@ -590,6 +596,7 @@ static void init_pages_in_zone(pg_data_t
 			__set_page_owner_init(page_ext, init_handle);
 			count++;
 		}
+		cond_resched();
 	}
 
 	pr_info("Node %d, zone %8s: page owner found early allocated %lu pages\n",
@@ -600,15 +607,12 @@ static void init_zones_in_node(pg_data_t
 {
 	struct zone *zone;
 	struct zone *node_zones = pgdat->node_zones;
-	unsigned long flags;
 
 	for (zone = node_zones; zone - node_zones < MAX_NR_ZONES; ++zone) {
 		if (!populated_zone(zone))
 			continue;
 
-		spin_lock_irqsave(&zone->lock, flags);
 		init_pages_in_zone(pgdat, zone);
-		spin_unlock_irqrestore(&zone->lock, flags);
 	}
 }
 
_

Patches currently in -mm which might be from vbabka@xxxxxxx are

mm-page_owner-make-init_pages_in_zone-faster.patch
mm-page_ext-periodically-reschedule-during-page_ext_init.patch
mm-page_owner-dont-grab-zone-lock-for-init_pages_in_zone.patch
mm-page_ext-move-page_ext_init-after-page_alloc_init_late.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html