+ mm-vmalloc-remove-preempt_disable-enable-when-do-preloading.patch added to -mm tree

The patch titled
     Subject: mm/vmalloc: remove preempt_disable/enable when doing preloading
has been added to the -mm tree.  Its filename is
     mm-vmalloc-remove-preempt_disable-enable-when-do-preloading.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-vmalloc-remove-preempt_disable-enable-when-do-preloading.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-vmalloc-remove-preempt_disable-enable-when-do-preloading.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: "Uladzislau Rezki (Sony)" <urezki@xxxxxxxxx>
Subject: mm/vmalloc: remove preempt_disable/enable when doing preloading

Some background.  Preemption was disabled to guarantee that a preloaded
object was available to the CPU it had been stored for.  This was
achieved by disabling preemption and taking the spinlock while
ne_fit_preload_node was checked and, if necessary, repopulated.

The aim was to avoid allocating in atomic context when the spinlock is
taken later for regular vmap allocations.  But that approach conflicts
with the CONFIG_PREEMPT_RT philosophy: calling spin_lock() with
preemption disabled is forbidden in a CONFIG_PREEMPT_RT kernel, because
spinlocks are sleeping locks there.
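
To make the conflict concrete: on CONFIG_PREEMPT_RT a spin_lock() is
backed by a sleeping rt_mutex, so taking it inside a preemption-disabled
region means sleeping in atomic context.  A minimal sketch of the
forbidden sequence (illustrative only, not taken from the patch):

	preempt_disable();		/* enter atomic context */
	spin_lock(&vmap_area_lock);	/* a sleeping lock on PREEMPT_RT:
					 * sleeping while atomic -> bug */
	/* ... critical section ... */
	spin_unlock(&vmap_area_lock);
	preempt_enable();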

Therefore, get rid of preempt_disable() and preempt_enable() when the
preload is done for splitting purposes.  As a result we no longer
guarantee that a CPU is preloaded; instead we minimize the cases where
it is not, by populating the per-CPU preload pointer under the
vmap_area_lock.  This implies that at least each caller that has done
the preallocation will not fall back to an atomic allocation later.  It
is possible that the preallocation turns out to be pointless, or that no
preallocation is done at all because of a race, but the data shows that
this is really rare.
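
Condensed, the resulting sequence is roughly the following (a sketch of
the hunk quoted below, with the surrounding alloc_vmap_area() logic
elided):

	struct vmap_area *pva = NULL;

	/* Preload outside of any lock, so GFP_KERNEL can be used. */
	if (!this_cpu_read(ne_fit_preload_node))
		pva = kmem_cache_alloc_node(vmap_area_cachep,
					    GFP_KERNEL, node);

	spin_lock(&vmap_area_lock);

	/*
	 * Publish the object under vmap_area_lock.  If migration raced
	 * us and this CPU is already preloaded, drop the spare object.
	 */
	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
		kmem_cache_free(vmap_area_cachep, pva);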

For example, I ran a special test case that follows the preload pattern
and path.  Twenty "unbind" threads ran it, each doing 1000000
allocations.  On average, a CPU was found not preloaded only 3.5 times
per 1000000 allocations.  So it can happen, but the number is
negligible.

[mhocko@xxxxxxxx: changelog additions]
Link: http://lkml.kernel.org/r/20191016095438.12391-1-urezki@xxxxxxxxx
Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
Reviewed-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
Acked-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Acked-by: Daniel Wagner <dwagner@xxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Cc: Hillf Danton <hdanton@xxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@xxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmalloc.c |   37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

--- a/mm/vmalloc.c~mm-vmalloc-remove-preempt_disable-enable-when-do-preloading
+++ a/mm/vmalloc.c
@@ -1077,31 +1077,34 @@ static struct vmap_area *alloc_vmap_area
 
 retry:
 	/*
-	 * Preload this CPU with one extra vmap_area object to ensure
-	 * that we have it available when fit type of free area is
-	 * NE_FIT_TYPE.
+	 * Preload this CPU with one extra vmap_area object. It is used
+	 * when fit type of free area is NE_FIT_TYPE. Please note, it
+	 * does not guarantee that an allocation occurs on a CPU that
+	 * is preloaded, instead we minimize the case when it is not.
+	 * It can happen because of cpu migration, because there is a
+	 * race until the below spinlock is taken.
 	 *
 	 * The preload is done in non-atomic context, thus it allows us
 	 * to use more permissive allocation masks to be more stable under
-	 * low memory condition and high memory pressure.
+	 * low memory condition and high memory pressure. In rare case,
+	 * if not preloaded, GFP_NOWAIT is used.
 	 *
-	 * Even if it fails we do not really care about that. Just proceed
-	 * as it is. "overflow" path will refill the cache we allocate from.
+	 * Set "pva" to NULL here, because of "retry" path.
 	 */
-	preempt_disable();
-	if (!__this_cpu_read(ne_fit_preload_node)) {
-		preempt_enable();
-		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
-		preempt_disable();
+	pva = NULL;
 
-		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
-			if (pva)
-				kmem_cache_free(vmap_area_cachep, pva);
-		}
-	}
+	if (!this_cpu_read(ne_fit_preload_node))
+		/*
+		 * Even if it fails we do not really care about that.
+		 * Just proceed as it is. If needed "overflow" path
+		 * will refill the cache we allocate from.
+		 */
+		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
 
 	spin_lock(&vmap_area_lock);
-	preempt_enable();
+
+	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
+		kmem_cache_free(vmap_area_cachep, pva);
 
 	/*
 	 * If an allocation fails, the "vend" address is
_

Patches currently in -mm which might be from urezki@xxxxxxxxx are

mm-vmalloc-remove-preempt_disable-enable-when-do-preloading.patch
mm-vmalloc-respect-passed-gfp_mask-when-do-preloading.patch
mm-vmalloc-add-more-comments-to-the-adjust_va_to_fit_type.patch



