On 01/12/2017 11:42 AM, Mel Gorman wrote:
> buffered_rmqueue removes a page from a given zone and uses the per-cpu
> list for order-0. This is fine but a hypothetical caller that wanted
> multiple order-0 pages has to disable/reenable interrupts multiple
> times. This patch structures buffered_rmqueue such that it's
> relatively easy to build a bulk order-0 page allocator. There is no
> functional change.
Strictly speaking, this will now skip VM_BUG_ON_PAGE(bad_range(...)) for order-0 allocations. Do we care?
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Acked-by: Hillf Danton <hillf.zj@xxxxxxxxxxxxxxx>
> ---
>  mm/page_alloc.c | 126 ++++++++++++++++++++++++++++++++++----------------------
>  1 file changed, 77 insertions(+), 49 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2c6d5f64feca..d8798583eaf8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2610,68 +2610,96 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
>  #endif
>  }
>
> +/* Remote page from the per-cpu list, caller must protect the list */
s/Remote/Remove/ in the comment above.
> +static struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
> +			gfp_t gfp_flags, int migratetype, bool cold,
order and gfp_flags seem unused here
> +			struct per_cpu_pages *pcp, struct list_head *list)
> +{
> +	struct page *page;