[obsolete] mm-page_alloc-re-enable-softirq-use-of-per-cpu-page-allocator.patch removed from -mm tree

The patch titled
     Subject: mm, page_alloc: re-enable softirq use of per-cpu page allocator
has been removed from the -mm tree.  Its filename was
     mm-page_alloc-re-enable-softirq-use-of-per-cpu-page-allocator.patch

This patch was dropped because it is obsolete

------------------------------------------------------
From: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
Subject: mm, page_alloc: re-enable softirq use of per-cpu page allocator

IRQ context was excluded from using the Per-Cpu-Pages (PCP) lists for
caching order-0 pages in commit 374ad05ab64d ("mm, page_alloc: only use
per-cpu allocator for irq-safe requests").

Unfortunately, this also excluded SoftIRQ.  This hurt performance for
the use case of refilling DMA RX rings in softirq context.

This patch re-allows softirq context, which should be safe because
BH/softirq is disabled while the lists are accessed.  PCP-list access
from both hard-IRQ and NMI context must not be allowed.  Peter Zijlstra
says in_nmi() code never accesses the page allocator, thus it should be
sufficient to test only for !in_irq().

One concern with this change is that it adds a BH (enable) scheduling
point at both PCP alloc and free.  If this patch highlights further
problems, the plan is to revert 374ad05ab64d and try again at a later
date to offset the irq enable/disable overhead.
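
For illustration only (not from the patch): the scheduling point arises
because, on non-RT kernels, local_bh_enable() runs any pending softirqs
once the softirq count drops back to zero:

	local_bh_disable();
	/* ... PCP alloc or free ... */
	local_bh_enable();	/* pending softirqs may run here */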

Fixes: 374ad05ab64d ("mm, page_alloc: only use per-cpu allocator for irq-safe requests")
Link: http://lkml.kernel.org/r/20170410150821.vcjlz7ntabtfsumm@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Reported-by: Tariq Toukan <tariqt@xxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Pankaj Gupta <pagupta@xxxxxxxxxx>
Cc: Tariq Toukan <ttoukan.linux@xxxxxxxxx>
Cc: Saeed Mahameed <saeedm@xxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff -puN mm/page_alloc.c~mm-page_alloc-re-enable-softirq-use-of-per-cpu-page-allocator mm/page_alloc.c
--- a/mm/page_alloc.c~mm-page_alloc-re-enable-softirq-use-of-per-cpu-page-allocator
+++ a/mm/page_alloc.c
@@ -2351,9 +2351,9 @@ static void drain_local_pages_wq(struct
 	 * cpu which is allright but we also have to make sure to not move to
 	 * a different one.
 	 */
-	preempt_disable();
+	local_bh_disable();
 	drain_local_pages(NULL);
-	preempt_enable();
+	local_bh_enable();
 }
 
 /*
@@ -2488,7 +2488,11 @@ void free_hot_cold_page(struct page *pag
 	unsigned long pfn = page_to_pfn(page);
 	int migratetype;
 
-	if (in_interrupt()) {
+	/*
+	 * Exclude (hard) IRQ and NMI context from using the pcplists.
+	 * But allow softirq context, via disabling BH.
+	 */
+	if (in_irq() || irqs_disabled()) {
 		__free_pages_ok(page, 0);
 		return;
 	}
@@ -2498,7 +2502,7 @@ void free_hot_cold_page(struct page *pag
 
 	migratetype = get_pfnblock_migratetype(page, pfn);
 	set_pcppage_migratetype(page, migratetype);
-	preempt_disable();
+	local_bh_disable();
 
 	/*
 	 * We only track unmovable, reclaimable and movable on pcp lists.
@@ -2529,7 +2533,7 @@ void free_hot_cold_page(struct page *pag
 	}
 
 out:
-	preempt_enable();
+	local_bh_enable();
 }
 
 /*
@@ -2654,7 +2658,7 @@ static struct page *__rmqueue_pcplist(st
 {
 	struct page *page;
 
-	VM_BUG_ON(in_interrupt());
+	VM_BUG_ON(in_irq() || irqs_disabled());
 
 	do {
 		if (list_empty(list)) {
@@ -2687,7 +2691,7 @@ static struct page *rmqueue_pcplist(stru
 	bool cold = ((gfp_flags & __GFP_COLD) != 0);
 	struct page *page;
 
-	preempt_disable();
+	local_bh_disable();
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
 	list = &pcp->lists[migratetype];
 	page = __rmqueue_pcplist(zone,  migratetype, cold, pcp, list);
@@ -2695,7 +2699,7 @@ static struct page *rmqueue_pcplist(stru
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 		zone_statistics(preferred_zone, zone);
 	}
-	preempt_enable();
+	local_bh_enable();
 	return page;
 }
 
@@ -2711,7 +2715,11 @@ struct page *rmqueue(struct zone *prefer
 	unsigned long flags;
 	struct page *page;
 
-	if (likely(order == 0) && !in_interrupt()) {
+	/*
+	 * Exclude (hard) IRQ and NMI context from using the pcplists.
+	 * But allow softirq context, via disabling BH.
+	 */
+	if (likely(order == 0) && !(in_irq() || irqs_disabled())) {
 		page = rmqueue_pcplist(preferred_zone, zone, order,
 				gfp_flags, migratetype);
 		goto out;
_

Patches currently in -mm which might be from brouer@xxxxxxxxxx are

