[to-be-updated] mm-vmalloc-remove-preempt_disable-enable-when-do-preloading.patch removed from -mm tree

The patch titled
     Subject: mm/vmalloc: remove preempt_disable/enable when doing preloading
has been removed from the -mm tree.  Its filename was
     mm-vmalloc-remove-preempt_disable-enable-when-do-preloading.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: "Uladzislau Rezki (Sony)" <urezki@xxxxxxxxx>
Subject: mm/vmalloc: remove preempt_disable/enable when doing preloading

Get rid of preempt_disable() and preempt_enable() when the preload is done
for splitting purposes.  The reason is that calling spin_lock() with
preemption disabled is forbidden in a CONFIG_PREEMPT_RT kernel, where
spinlocks are sleeping locks.

With this change we therefore no longer guarantee that a CPU is preloaded;
instead we minimize the cases when it is not.
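
The pattern being removed looks like this on the alloc_vmap_area() path
(assembled from the hunks below); on PREEMPT_RT, spin_lock() maps to a
sleeping lock, so taking vmap_area_lock inside the preempt-disabled
section is not allowed:

	preempt_disable();
	if (!__this_cpu_read(ne_fit_preload_node)) {
		preempt_enable();
		/* GFP_KERNEL allocation needs preemption enabled. */
		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
		preempt_disable();
		...
	}

	spin_lock(&vmap_area_lock);	/* taken with preemption still disabled */
	preempt_enable();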

For example, a special test case that follows the preload pattern and path
was run by 20 "unbind" threads, each doing 1000000 allocations.  A CPU
turned out not to be preloaded only 3.5 times per 1000000 allocations, so
it can happen, but the number is negligible.
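
For reference, the resulting preload path reads roughly as follows.  This
is a minimal sketch assembled from the hunks below, assuming the per-CPU
slot ne_fit_preload_node, the vmap_area_cachep cache and vmap_area_lock
from mm/vmalloc.c; it is illustrative, not the exact merged code:

	if (!this_cpu_read(ne_fit_preload_node)) {
		/*
		 * Non-atomic context, so GFP_KERNEL can be used.  The task
		 * may migrate between this read and the cmpxchg below,
		 * which is why the preload is only best effort.
		 */
		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);

		/*
		 * Install the spare object only if the per-CPU slot is
		 * still empty; otherwise free it again.
		 */
		if (this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
			if (pva)
				kmem_cache_free(vmap_area_cachep, pva);
		}
	}

	spin_lock(&vmap_area_lock);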

Link: http://lkml.kernel.org/r/20191009164934.10166-1-urezki@xxxxxxxxx
Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
Reviewed-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
Acked-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Acked-by: Daniel Wagner <dwagner@xxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Hillf Danton <hdanton@xxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@xxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmalloc.c |   17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

--- a/mm/vmalloc.c~mm-vmalloc-remove-preempt_disable-enable-when-do-preloading
+++ a/mm/vmalloc.c
@@ -1077,9 +1077,12 @@ static struct vmap_area *alloc_vmap_area
 
 retry:
 	/*
-	 * Preload this CPU with one extra vmap_area object to ensure
-	 * that we have it available when fit type of free area is
-	 * NE_FIT_TYPE.
+	 * Preload this CPU with one extra vmap_area object. It is used
+	 * when fit type of free area is NE_FIT_TYPE. Please note, it
+	 * does not guarantee that an allocation occurs on a CPU that
+	 * is preloaded, instead we minimize the case when it is not.
+	 * It can happen because of migration, because there is a race
+	 * until the below spinlock is taken.
 	 *
 	 * The preload is done in non-atomic context, thus it allows us
 	 * to use more permissive allocation masks to be more stable under
@@ -1088,20 +1091,16 @@ retry:
 	 * Even if it fails we do not really care about that. Just proceed
 	 * as it is. "overflow" path will refill the cache we allocate from.
 	 */
-	preempt_disable();
-	if (!__this_cpu_read(ne_fit_preload_node)) {
-		preempt_enable();
+	if (!this_cpu_read(ne_fit_preload_node)) {
 		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
-		preempt_disable();
 
-		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
+		if (this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
 			if (pva)
 				kmem_cache_free(vmap_area_cachep, pva);
 		}
 	}
 
 	spin_lock(&vmap_area_lock);
-	preempt_enable();
 
 	/*
 	 * If an allocation fails, the "vend" address is
_

Patches currently in -mm which might be from urezki@xxxxxxxxx are




