On Wed, Jan 24, 2018 at 10:30:49AM +0800, Aaron Lu wrote:
> When freeing a batch of pages from Per-CPU-Pages(PCP) back to buddy,
> the zone->lock is held and then pages are chosen from PCP's migratetype
> list. There is actually no need to do this 'choose part' under the
> lock since these are PCP pages: the only CPU that can touch them is us
> and irq is also disabled.
>
> Moving this part outside the lock could reduce lock held time and
> improve performance. Test with will-it-scale/page_fault1 full load:
>
> kernel      Broadwell(2S)  Skylake(2S)    Broadwell(4S)   Skylake(4S)
> v4.15-rc4   9037332        8000124        13642741        15728686
> this patch  9608786 +6.3%  8368915 +4.6%  14042169 +2.9%  17433559 +10.8%
>
> What the test does is: start $nr_cpu processes, each of which repeatedly
> does the following for 5 minutes:
> 1 mmap 128M of anonymous space;
> 2 write access to that space;
> 3 munmap.
> The score is the aggregated iteration count.
>
> https://github.com/antonblanchard/will-it-scale/blob/master/tests/page_fault1.c
>
> Signed-off-by: Aaron Lu <aaron.lu@xxxxxxxxx>
> ---
>  mm/page_alloc.c | 33 +++++++++++++++++++-------------
>  1 file changed, 19 insertions(+), 14 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4093728f292e..a076f754dac1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1113,12 +1113,12 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  	int migratetype = 0;
>  	int batch_free = 0;
>  	bool isolated_pageblocks;
> +	struct list_head head;
> +	struct page *page, *tmp;
>
> -	spin_lock(&zone->lock);
> -	isolated_pageblocks = has_isolate_pageblock(zone);
> +	INIT_LIST_HEAD(&head);
>

Declare head as LIST_HEAD(head) and avoid INIT_LIST_HEAD. Otherwise I
think this is safe.

Acked-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>

-- 
Mel Gorman
SUSE Labs
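
[Editorial sketch] For reference, the overall shape being discussed looks
roughly like the following, using LIST_HEAD() as suggested above. This is
illustrative only, not the actual patch: the batch_free/migratetype
round-robin bookkeeping and the per-page free path are simplified, and the
sketch assumes (as the real caller guarantees) that the PCP holds at least
'count' pages.

static void free_pcppages_bulk(struct zone *zone, int count,
			       struct per_cpu_pages *pcp)
{
	int migratetype = 0;
	LIST_HEAD(head);	/* initialised in place, no INIT_LIST_HEAD() call */
	struct page *page, *tmp;

	/*
	 * Phase 1 (no zone->lock): move 'count' pages from the PCP
	 * migratetype lists onto the local list. These pages are private
	 * to this CPU and IRQs are disabled, so no lock is needed here.
	 * The real code round-robins across migratetypes via batch_free;
	 * that bookkeeping is omitted in this sketch.
	 */
	while (count > 0) {
		struct list_head *list = &pcp->lists[migratetype];

		if (list_empty(list)) {
			migratetype = (migratetype + 1) % MIGRATE_PCPTYPES;
			continue;
		}
		page = list_last_entry(list, struct page, lru);
		list_del(&page->lru);
		list_add_tail(&page->lru, &head);
		count--;
	}

	/*
	 * Phase 2: hand the collected pages back to the buddy allocator.
	 * Only this part needs zone->lock, so the lock hold time shrinks
	 * to just the buddy merging work.
	 */
	spin_lock(&zone->lock);
	list_for_each_entry_safe(page, tmp, &head, lru) {
		/* ... free each page to buddy, e.g. via __free_one_page() ... */
	}
	spin_unlock(&zone->lock);
}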