The patch titled
     Subject: mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock-v4-update2
has been added to the -mm tree.  Its filename is
     mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock-v4-update2.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock-v4-update2.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock-v4-update2.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Aaron Lu <aaron.lu@xxxxxxxxx>
Subject: mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock-v4-update2

Use a helper function to prefetch the buddy, as suggested by Dave Hansen.
Drop the earlier change that replaced list_add_tail(), to avoid reordering
pages.

Link: http://lkml.kernel.org/r/20180301062845.26038-4-aaron.lu@xxxxxxxxx
Link: http://lkml.kernel.org/r/20180320113146.GB24737@xxxxxxxxx
Signed-off-by: Aaron Lu <aaron.lu@xxxxxxxxx>
Suggested-by: Ying Huang <ying.huang@xxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

diff -puN mm/page_alloc.c~mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock-v4-update2 mm/page_alloc.c
--- a/mm/page_alloc.c~mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock-v4-update2
+++ a/mm/page_alloc.c
@@ -1063,6 +1063,15 @@ static bool bulkfree_pcp_prepare(struct
 }
 #endif /* CONFIG_DEBUG_VM */
 
+static inline void prefetch_buddy(struct page *page)
+{
+	unsigned long pfn = page_to_pfn(page);
+	unsigned long buddy_pfn = __find_buddy_pfn(pfn, 0);
+	struct page *buddy = page + (buddy_pfn - pfn);
+
+	prefetch(buddy);
+}
+
 /*
  * Frees a number of pages from the PCP lists
  * Assumes all pages on list are in same zone, and of same order.
@@ -1079,6 +1088,7 @@ static void free_pcppages_bulk(struct zo
 {
 	int migratetype = 0;
 	int batch_free = 0;
+	int prefetch_nr = 0;
 	bool isolated_pageblocks;
 	struct page *page, *tmp;
 	LIST_HEAD(head);
@@ -1105,9 +1115,6 @@ static void free_pcppages_bulk(struct zo
 			batch_free = count;
 
 		do {
-			unsigned long pfn, buddy_pfn;
-			struct page *buddy;
-
 			page = list_last_entry(list, struct page, lru);
 			/* must delete to avoid corrupting pcp list */
 			list_del(&page->lru);
@@ -1116,7 +1123,7 @@ static void free_pcppages_bulk(struct zo
 			if (bulkfree_pcp_prepare(page))
 				continue;
 
-			list_add(&page->lru, &head);
+			list_add_tail(&page->lru, &head);
 
 			/*
 			 * We are going to put the page back to the global
@@ -1125,14 +1132,10 @@ static void free_pcppages_bulk(struct zo
 			 * pool, prefetch its buddy to speed up later access
 			 * under zone->lock. It is believed the overhead of
 			 * an additional test and calculating buddy_pfn here
 			 * can be offset by reduced memory latency later. To
 			 * avoid excessive prefetching due to large count, only
-			 * prefetch buddy for the last pcp->batch nr of pages.
+			 * prefetch buddy for the first pcp->batch nr of pages.
			 */
-			if (count > pcp->batch)
-				continue;
-			pfn = page_to_pfn(page);
-			buddy_pfn = __find_buddy_pfn(pfn, 0);
-			buddy = page + (buddy_pfn - pfn);
-			prefetch(buddy);
+			if (prefetch_nr++ < pcp->batch)
+				prefetch_buddy(page);
 		} while (--count && --batch_free && !list_empty(list));
 	}
_

Patches currently in -mm which might be from aaron.lu@xxxxxxxxx are

mm-free_pcppages_bulk-update-pcp-count-inside.patch
mm-free_pcppages_bulk-do-not-hold-lock-when-picking-pages-to-free.patch
mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock.patch
mm-free_pcppages_bulk-prefetch-buddy-while-not-holding-lock-v4-update2.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
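For readers following the diff: at order 0, __find_buddy_pfn() simply flips
the lowest bit of the page frame number, so prefetch_buddy() warms the cache
with the buddy's struct page before zone->lock is taken and the merge work
happens. Below is a minimal userspace sketch of that XOR arithmetic
(illustration only, not kernel code; find_buddy_pfn() is a stand-in for the
kernel's __find_buddy_pfn()):

/* Illustration only -- userspace sketch of the kernel's buddy math. */
#include <stdio.h>

/*
 * Same XOR as the kernel's __find_buddy_pfn(): the order-n buddy of a
 * page is found by flipping bit n of its page frame number.
 */
static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
	return pfn ^ (1UL << order);
}

int main(void)
{
	unsigned long pfn = 0x1234;

	/* Order 0, as used by prefetch_buddy(): bit 0 flips. */
	printf("order-0 buddy of %#lx: %#lx\n", pfn, find_buddy_pfn(pfn, 0));
	/* Higher order for comparison: bit 3 flips instead. */
	printf("order-3 buddy of %#lx: %#lx\n", pfn, find_buddy_pfn(pfn, 3));
	return 0;
}

The pointer arithmetic in prefetch_buddy() follows from this: buddy_pfn
differs from pfn only in the low bit, so page + (buddy_pfn - pfn) reaches
the buddy's struct page directly, which keeps the per-page cost low enough
to do while not holding the lock.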