The patch titled
     Subject: mm: reduce lock contention of pcp buffer refill
has been added to the -mm mm-unstable branch.  Its filename is
     mm-reduce-lock-contention-of-pcp-buffer-refill.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-reduce-lock-contention-of-pcp-buffer-refill.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Alexander Halbuer <halbuer@xxxxxxxxxxxxxxxxxxx>
Subject: mm: reduce lock contention of pcp buffer refill
Date: Wed, 1 Feb 2023 17:25:49 +0100

rmqueue_bulk() batches the allocation of multiple elements to refill the
per-CPU buffers into a single hold of the zone lock.  Each element is
allocated and checked using check_pcp_refill().  The check touches every
related struct page, which is especially expensive for higher-order
allocations (huge pages).

This patch reduces the time the zone lock is held by moving the check out
of the critical section, similar to rmqueue_buddy(), which allocates a
single element.

Measurements of parallel allocation-heavy workloads show that the average
huge page allocation latency drops by 50 percent for two cores and by
nearly 90 percent for 24 cores.

Link: https://lkml.kernel.org/r/20230201162549.68384-1-halbuer@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Alexander Halbuer <halbuer@xxxxxxxxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/page_alloc.c~mm-reduce-lock-contention-of-pcp-buffer-refill
+++ a/mm/page_alloc.c
@@ -3140,6 +3140,8 @@ static int rmqueue_bulk(struct zone *zon
 {
 	unsigned long flags;
 	int i, allocated = 0;
+	struct list_head *prev_tail = list->prev;
+	struct page *pos, *n;
 
 	spin_lock_irqsave(&zone->lock, flags);
 	for (i = 0; i < count; ++i) {
@@ -3148,9 +3150,6 @@ static int rmqueue_bulk(struct zone *zon
 		if (unlikely(page == NULL))
 			break;
 
-		if (unlikely(check_pcp_refill(page, order)))
-			continue;
-
 		/*
 		 * Split buddy pages returned by expand() are received here in
 		 * physical page order. The page is added to the tail of
@@ -3162,7 +3161,6 @@ static int rmqueue_bulk(struct zone *zon
 		 * pages are ordered properly.
 		 */
 		list_add_tail(&page->pcp_list, list);
-		allocated++;
 		if (is_migrate_cma(get_pcppage_migratetype(page)))
 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
 					      -(1 << order));
@@ -3176,6 +3174,22 @@ static int rmqueue_bulk(struct zone *zon
 	 */
 	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	/*
+	 * Pages are appended to the pcp list without checking to reduce the
+	 * time holding the zone lock. Checking the appended pages happens right
+	 * after the critical section while still holding the pcp lock.
+	 */
+	pos = list_first_entry(prev_tail, struct page, pcp_list);
+	list_for_each_entry_safe_from(pos, n, list, pcp_list) {
+		if (unlikely(check_pcp_refill(pos, order))) {
+			list_del(&pos->pcp_list);
+			continue;
+		}
+
+		allocated++;
+	}
+
 	return allocated;
 }
_

Patches currently in -mm which might be from halbuer@xxxxxxxxxxxxxxxxxxx are

mm-reduce-lock-contention-of-pcp-buffer-refill.patch
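
For readers who want the shape of the change outside of kernel context, the
sketch below is a minimal, self-contained userspace illustration of the same
pattern: remember the old list tail, append items while the lock is held, and
validate only the newly appended tail after the lock is dropped.  The names
refill_bulk(), item_is_valid() and the item pool are illustrative inventions,
not part of this patch or of any kernel API; the authoritative change is the
diff above.

/*
 * Illustrative userspace sketch (not kernel code): refill a consumer list
 * from a shared pool, holding the pool lock only while items are moved.
 * Validation of the newly appended items happens after the unlock.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct item {
	int payload;
	struct item *next;
};

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *pool_head;		/* shared pool, protected by pool_lock */

/* Hypothetical stand-in for check_pcp_refill(): reject odd payloads. */
static int item_is_valid(const struct item *it)
{
	return (it->payload % 2) == 0;
}

/*
 * Append up to @count pool items to the singly linked list ending at *tailp.
 * Returns the number of appended items that passed validation.
 */
static int refill_bulk(struct item **tailp, int count)
{
	struct item *prev_tail = *tailp;	/* old tail, like prev_tail above */
	struct item *pos, *prev;
	int allocated = 0;

	pthread_mutex_lock(&pool_lock);
	for (int i = 0; i < count && pool_head; i++) {
		struct item *it = pool_head;

		pool_head = it->next;		/* detach from the pool ... */
		it->next = NULL;
		(*tailp)->next = it;		/* ... and append to the caller's list */
		*tailp = it;
	}
	pthread_mutex_unlock(&pool_lock);

	/* Check only the newly appended items, outside the critical section. */
	prev = prev_tail;
	for (pos = prev_tail->next; pos; pos = prev->next) {
		if (!item_is_valid(pos)) {
			prev->next = pos->next;	/* unlink the bad item */
			if (*tailp == pos)
				*tailp = prev;
			free(pos);
			continue;
		}
		allocated++;
		prev = pos;
	}
	return allocated;
}

int main(void)
{
	struct item anchor = { 0, NULL };	/* list head sentinel */
	struct item *tail = &anchor;

	/* Populate the pool; the odd payloads will fail the check. */
	for (int i = 8; i > 0; i--) {
		struct item *it = malloc(sizeof(*it));
		it->payload = i;
		it->next = pool_head;
		pool_head = it;
	}

	printf("valid items appended: %d\n", refill_bulk(&tail, 8));
	return 0;
}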