The patch titled
     Subject: mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lockpatch-v4
has been added to the -mm tree.  Its filename is
     mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lockpatch-v4.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lockpatch-v4.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lockpatch-v4.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Aaron Lu <aaron.lu@xxxxxxxxx>
Subject: mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lockpatch-v4

Link: http://lkml.kernel.org/r/20180301062845.26038-4-aaron.lu@xxxxxxxxx
Link: http://lkml.kernel.org/r/20180309082431.GB30868@xxxxxxxxx
Signed-off-by: Aaron Lu <aaron.lu@xxxxxxxxx>
Cc: Ying Huang <ying.huang@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff -puN mm/page_alloc.c~mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lockpatch-v4 mm/page_alloc.c
--- a/mm/page_alloc.c~mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lockpatch-v4
+++ a/mm/page_alloc.c
@@ -1116,15 +1116,19 @@ static void free_pcppages_bulk(struct zo
 		if (bulkfree_pcp_prepare(page))
 			continue;
 
-		list_add_tail(&page->lru, &head);
+		list_add(&page->lru, &head);
 
 		/*
 		 * We are going to put the page back to the global
 		 * pool, prefetch its buddy to speed up later access
 		 * under zone->lock. It is believed the overhead of
-		 * calculating buddy_pfn here can be offset by reduced
-		 * memory latency later.
+		 * an additional test and calculating buddy_pfn here
+		 * can be offset by reduced memory latency later. To
+		 * avoid excessive prefetching due to large count, only
+		 * prefetch buddy for the last pcp->batch nr of pages.
 		 */
+		if (count > pcp->batch)
+			continue;
 		pfn = page_to_pfn(page);
 		buddy_pfn = __find_buddy_pfn(pfn, 0);
 		buddy = page + (buddy_pfn - pfn);
_

Patches currently in -mm which might be from aaron.lu@xxxxxxxxx are

mm-free_pcppages_bulk-update-pcp-count-inside.patch
mm-free_pcppages_bulk-do-not-hold-lock-when-picking-pages-to-free.patch
mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock.patch
mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lockpatch-v4.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html