The patch titled
     Subject: mm/page_alloc: track range of active PCP lists during bulk free
has been added to the -mm tree.  Its filename is
     mm-page_alloc-track-range-of-active-pcp-lists-during-bulk-free.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-page_alloc-track-range-of-active-pcp-lists-during-bulk-free.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-page_alloc-track-range-of-active-pcp-lists-during-bulk-free.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Subject: mm/page_alloc: track range of active PCP lists during bulk free

free_pcppages_bulk() frees pages in a round-robin fashion.  Originally,
this was dealing only with migratetypes but storing high-order pages
means that there can be many more empty lists that are uselessly checked.
Track the minimum and maximum active pindex to reduce the search space.

Link: https://lkml.kernel.org/r/20220217002227.5739-3-mgorman@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Aaron Lu <aaron.lu@xxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-track-range-of-active-pcp-lists-during-bulk-free
+++ a/mm/page_alloc.c
@@ -1450,6 +1450,8 @@ static void free_pcppages_bulk(struct zo
 					struct per_cpu_pages *pcp)
 {
 	int pindex = 0;
+	int min_pindex = 0;
+	int max_pindex = NR_PCP_LISTS - 1;
 	int batch_free = 0;
 	int nr_freed = 0;
 	unsigned int order;
@@ -1475,13 +1477,20 @@ static void free_pcppages_bulk(struct zo
 		 */
 		do {
 			batch_free++;
-			if (++pindex == NR_PCP_LISTS)
-				pindex = 0;
+			if (++pindex > max_pindex)
+				pindex = min_pindex;
 			list = &pcp->lists[pindex];
-		} while (list_empty(list));
+			if (!list_empty(list))
+				break;
+
+			if (pindex == max_pindex)
+				max_pindex--;
+			if (pindex == min_pindex)
+				min_pindex++;
+		} while (1);
 
 		/* This is the only non-empty list.  Free them all. */
-		if (batch_free == NR_PCP_LISTS)
+		if (batch_free >= max_pindex - min_pindex)
 			batch_free = count;
 
 		order = pindex_to_order(pindex);
_

Patches currently in -mm which might be from mgorman@xxxxxxxxxxxxxxxxxxx are

mm-page_alloc-fetch-the-correct-pcp-buddy-during-bulk-free.patch
mm-page_alloc-track-range-of-active-pcp-lists-during-bulk-free.patch
mm-page_alloc-simplify-how-many-pages-are-selected-per-pcp-list-during-bulk-free.patch
mm-page_alloc-drain-the-requested-list-first-during-bulk-free.patch
mm-page_alloc-free-pages-in-a-single-pass-during-bulk-free.patch
mm-page_alloc-limit-number-of-high-order-pages-on-pcp-during-bulk-free.patch
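
For anyone who wants to experiment with the range-tracking idea outside the
kernel tree, below is a minimal standalone C sketch.  It is not kernel code:
the pcp lists are modelled as plain page counters rather than struct
list_head chains, NR_PCP_LISTS is an arbitrary demo size, and the
free_bulk()/main() harness is hypothetical, added only for illustration.  It
does exercise the same mechanics as the patch: a round-robin scan that wraps
to min_pindex instead of 0 and shrinks the [min_pindex, max_pindex] window
whenever an endpoint list turns out to be empty.

#include <stdio.h>

#define NR_PCP_LISTS	15	/* demo size only, not the kernel's value */

/* Pages queued on each pcp list, modelled as plain counters. */
static int lists[NR_PCP_LISTS];

static void free_bulk(int count)
{
	int pindex = -1;
	int min_pindex = 0;
	int max_pindex = NR_PCP_LISTS - 1;

	while (count > 0) {
		/*
		 * Round-robin scan, as in the patch: wrap to min_pindex
		 * instead of 0, and trim the active range whenever an
		 * endpoint list is found empty.  Empty lists in the
		 * middle of the range are merely skipped.
		 */
		do {
			if (++pindex > max_pindex)
				pindex = min_pindex;
			if (lists[pindex])
				break;
			if (pindex == max_pindex)
				max_pindex--;
			if (pindex == min_pindex)
				min_pindex++;
			/* All lists empty: a standalone demo must stop. */
			if (min_pindex > max_pindex)
				return;
		} while (1);

		lists[pindex]--;	/* "free" one page */
		count--;
		printf("freed from list %d, active range now [%d, %d]\n",
		       pindex, min_pindex, max_pindex);
	}
}

int main(void)
{
	lists[3] = 2;
	lists[9] = 1;
	free_bulk(3);
	return 0;
}

One deliberate difference from the patch: the sketch needs an explicit
min_pindex > max_pindex exit, whereas free_pcppages_bulk() avoids the
endless-scan case by clamping count to pcp->count before entering the loop.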