On 03/13/2017 09:31 AM, Vlastimil Babka wrote:
> On 03/13/2017 09:02 AM, zhongjiang wrote:
>> From: zhong jiang <zhongjiang@xxxxxxxxxx>
>>
>> When commit 374ad05ab64d ("mm, page_alloc: only use per-cpu allocator
>> for irq-safe requests") was introduced to mainline, drain_pages_zone
>> kept using irq save/restore around free_pcppages_bulk to protect
>> against IRQ context. But drain_pages_zone no longer needs to disable
>> IRQs, because preempt_disable is already in effect, so it is safe to
>> remove the code.
>>
>> Fixes: 374ad05ab64d ("mm, page_alloc: only use per-cpu allocator for irq-safe requests")
>> Signed-off-by: zhong jiang <zhongjiang@xxxxxxxxxx>
>> ---
>>  mm/page_alloc.c | 3 ---
>>  1 file changed, 3 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 05c3956..7b16095 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -2294,11 +2294,9 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
>>   */
>>  static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>>  {
>> -	unsigned long flags;
>>  	struct per_cpu_pageset *pset;
>>  	struct per_cpu_pages *pcp;
>>
>> -	local_irq_save(flags);
>>  	pset = per_cpu_ptr(zone->pageset, cpu);
>
> NAK. We have to make sure that pset corresponds to the cpu we are
> running on.

Sorry, I was thinking about other callers, such as drain_local_pages(),
but it seems like all of them have the pinned-cpu prerequisite.

But do we know that there can't be an interrupt updating pcp->count
between the moment we read it and the moment we call
free_pcppages_bulk()? This should also be discussed in the changelog.

Also, the "Fixes:" tag is not necessary for a performance optimization.

>
>>
>>  	pcp = &pset->pcp;
>> @@ -2306,7 +2304,6 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>>  		free_pcppages_bulk(zone, pcp->count, pcp);
>>  		pcp->count = 0;
>>  	}
>> -	local_irq_restore(flags);
>>  }
>>
>>  /*
>>
>
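For reference, this is roughly how drain_pages_zone() would look with the
patch applied; the comments are only my annotation of the window asked
about above, they are not part of the submitted patch:

static void drain_pages_zone(unsigned int cpu, struct zone *zone)
{
	struct per_cpu_pageset *pset;
	struct per_cpu_pages *pcp;

	/*
	 * Callers are expected to guarantee that we run pinned to @cpu
	 * (the "pinned-cpu prerequisite" above), so this is the local
	 * pageset.
	 */
	pset = per_cpu_ptr(zone->pageset, cpu);

	pcp = &pset->pcp;
	if (pcp->count) {
		/*
		 * The window in question: with local_irq_save() gone,
		 * an interrupt hitting between the pcp->count read
		 * above and the pcp->count = 0 below must be shown not
		 * to touch the pcp lists, otherwise the count and the
		 * lists can go out of sync.
		 */
		free_pcppages_bulk(zone, pcp->count, pcp);
		pcp->count = 0;
	}
}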