On 2/15/22 15:51, Mel Gorman wrote:
> free_pcppages_bulk() frees pages in a round-robin fashion. Originally,
> this was dealing only with migratetypes but storing high-order pages
> means that there can be many more empty lists that are uselessly
> checked. Track the minimum and maximum active pindex to reduce the
> search space.
> 
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> ---
>  mm/page_alloc.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 08de32cfd9bb..c5110fdeb115 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1450,6 +1450,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  					struct per_cpu_pages *pcp)
>  {
>  	int pindex = 0;
> +	int min_pindex = 0;
> +	int max_pindex = NR_PCP_LISTS - 1;
>  	int batch_free = 0;
>  	int nr_freed = 0;
>  	unsigned int order;
> @@ -1478,10 +1480,17 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  			if (++pindex == NR_PCP_LISTS)

Hmm, so in the very first iteration at this point pindex is already 1.
This looks odd even before the patch, as the order-0 MIGRATE_UNMOVABLE
list is only processed after all the higher orders?

>  				pindex = 0;

Also, shouldn't this wrap-around check use min_pindex/max_pindex
instead of NR_PCP_LISTS and 0?

>  			list = &pcp->lists[pindex];
> -		} while (list_empty(list));
> +			if (!list_empty(list))
> +				break;
> +
> +			if (pindex == max_pindex)
> +				max_pindex--;
> +			if (pindex == min_pindex)

So with pindex 1 and min_pindex == 0 this will not trigger until
(eventually) the first pindex wrap-around, which seems suboptimal. But
I can see the later patches change things substantially anyway, so it
may be moot...

> +				min_pindex++;
> +		} while (1);
> 
>  		/* This is the only non-empty list. Free them all. */
> -		if (batch_free == NR_PCP_LISTS)
> +		if (batch_free >= max_pindex - min_pindex)
>  			batch_free = count;
> 
>  		order = pindex_to_order(pindex);
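
To make the above concrete, here is a minimal userspace sketch that
replays the patched scan loop. It is not kernel code: NR_PCP_LISTS is
pinned to 15 purely for illustration and each PCP list is modelled as
a plain page count, with only list 0 holding pages.

/*
 * Standalone model of the patched scan loop; not kernel code.
 * NR_PCP_LISTS is an illustrative value and each PCP list is
 * modelled as a plain page count.
 */
#include <stdio.h>

#define NR_PCP_LISTS 15

int main(void)
{
	int list_pages[NR_PCP_LISTS] = { 4 };	/* lists 1..14 empty */
	int pindex = 0;
	int min_pindex = 0;
	int max_pindex = NR_PCP_LISTS - 1;
	int batch_free = 0;

	do {
		batch_free++;
		/* Pre-increment: list 0 is only probed after a full wrap. */
		if (++pindex == NR_PCP_LISTS)	/* wraps on NR_PCP_LISTS/0 */
			pindex = 0;
		if (list_pages[pindex])
			break;
		if (pindex == max_pindex)
			max_pindex--;
		/* pindex started past 0, so this stays false until the wrap. */
		if (pindex == min_pindex)
			min_pindex++;
	} while (1);

	printf("found pages at pindex=%d after %d probes\n",
	       pindex, batch_free);
	printf("min_pindex=%d max_pindex=%d despite lists 1..%d being empty\n",
	       min_pindex, max_pindex, NR_PCP_LISTS - 1);
	return 0;
}

This prints that pages are found at pindex 0 only after 15 probes,
with min_pindex still 0 and max_pindex only narrowed to 13, even
though lists 1..14 are all empty. Wrapping on max_pindex/min_pindex
instead of NR_PCP_LISTS/0 would at least let a later scan skip the
known-empty tail, which is the alternative I'm suggesting above.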