On 03/01/2018 07:28 AM, Aaron Lu wrote:
> Matthew Wilcox found that all callers of free_pcppages_bulk() currently
> update pcp->count immediately after so it's natural to do it inside
> free_pcppages_bulk().
>
> No functionality or performance change is expected from this patch.

Well, it's N decrements instead of one decrement by N / assignment of
zero. But I assume the difference is negligible anyway, right?

> Suggested-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> Signed-off-by: Aaron Lu <aaron.lu@xxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

> ---
>  mm/page_alloc.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index cb416723538f..faa33eac1635 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1148,6 +1148,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  			page = list_last_entry(list, struct page, lru);
>  			/* must delete as __free_one_page list manipulates */
>  			list_del(&page->lru);
> +			pcp->count--;
>
>  			mt = get_pcppage_migratetype(page);
>  			/* MIGRATE_ISOLATE page should not go to pcplists */
> @@ -2416,10 +2417,8 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
>  	local_irq_save(flags);
>  	batch = READ_ONCE(pcp->batch);
>  	to_drain = min(pcp->count, batch);
> -	if (to_drain > 0) {
> +	if (to_drain > 0)
>  		free_pcppages_bulk(zone, to_drain, pcp);
> -		pcp->count -= to_drain;
> -	}
>  	local_irq_restore(flags);
>  }
>  #endif
> @@ -2441,10 +2440,8 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>  	pset = per_cpu_ptr(zone->pageset, cpu);
>
>  	pcp = &pset->pcp;
> -	if (pcp->count) {
> +	if (pcp->count)
>  		free_pcppages_bulk(zone, pcp->count, pcp);
> -		pcp->count = 0;
> -	}
>  	local_irq_restore(flags);
>  }
>
> @@ -2668,7 +2665,6 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>  	if (pcp->count >= pcp->high) {
>  		unsigned long batch = READ_ONCE(pcp->batch);
>  		free_pcppages_bulk(zone, batch, pcp);
> -		pcp->count -= batch;
>  	}
>  }
>
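
To spell out the "N decrements vs. one adjustment" point above, here is a
stand-alone toy model (plain userspace C, compilable on its own; the
struct and function names are made up for illustration and are not the
kernel code):

	/*
	 * Toy model of the accounting change, not kernel code: a counter
	 * decremented once per freed page inside the bulk-free loop,
	 * versus a single subtraction done by the caller afterwards.
	 */
	#include <stdio.h>

	struct pcp { int count; };

	/* After the patch: the helper keeps count in sync itself. */
	static void free_bulk_per_page(struct pcp *pcp, int to_free)
	{
		while (to_free--) {
			/* ...unlink one page, hand it to the buddy... */
			pcp->count--;	/* one decrement per page */
		}
	}

	/* Before the patch: helper frees pages, caller fixes count. */
	static void free_bulk_caller_adjusts(struct pcp *pcp, int to_free)
	{
		/* helper would free to_free pages without touching count */
		pcp->count -= to_free;	/* single adjustment afterwards */
	}

	int main(void)
	{
		struct pcp a = { .count = 8 }, b = { .count = 8 };

		free_bulk_per_page(&a, 8);
		free_bulk_caller_adjusts(&b, 8);

		/* Both end at 0; the first just gets there in N steps. */
		printf("%d %d\n", a.count, b.count);
		return 0;
	}

Both variants leave pcp->count identical at the end; the only difference
is N individual decrements on a per-cpu counter (with IRQs already
disabled around all three call sites, per the quoted context), which is
why I'd expect the cost to be in the noise.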