On Thu, Oct 03, 2019 at 11:09:06AM +0200, Daniel Wagner wrote:
> Replace preempt_enable() and preempt_disable() with the vmap_area_lock
> spin_lock instead. Calling spin_lock() with preempt disabled is
> illegal for -rt. Furthermore, enabling preemption inside the
> spin_lock() doesn't really make sense.
>
> Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for
> split purpose")
> Cc: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> Signed-off-by: Daniel Wagner <dwagner@xxxxxxx>
> ---
>  mm/vmalloc.c | 9 +++------
>  1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 08c134aa7ff3..0d1175673583 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1091,11 +1091,11 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  	 * Even if it fails we do not really care about that. Just proceed
>  	 * as it is. "overflow" path will refill the cache we allocate from.
>  	 */
> -	preempt_disable();
> +	spin_lock(&vmap_area_lock);
>  	if (!__this_cpu_read(ne_fit_preload_node)) {
> -		preempt_enable();
> +		spin_unlock(&vmap_area_lock);
>  		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> -		preempt_disable();
> +		spin_lock(&vmap_area_lock);
>
>  		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
>  			if (pva)
> @@ -1103,9 +1103,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  		}
>  	}
>
> -	spin_lock(&vmap_area_lock);
> -	preempt_enable();
> -
>  	/*
>  	 * If an allocation fails, the "vend" address is
>  	 * returned. Therefore trigger the overflow path.
> --
> 2.16.4
>

Some background: the idea was to avoid taking the vmap_area_lock
several times, so a preempt_disable()/preempt_enable() pair was used
instead, in order to stay on the same CPU. With PREEMPT_RT that is a
problem, so

Reviewed-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>

--
Vlad Rezki
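
For anyone skimming the archive, here is an annotated sketch of the
preload path as it looks with the patch applied. The comments are
illustrative, and the kmem_cache_free() call is an assumption based on
the "if (pva)" branch that the hunk context cuts off:

	/*
	 * Take vmap_area_lock up front. On !PREEMPT_RT this also
	 * disables preemption, so we stay on the current CPU, which
	 * is what the old preempt_disable() provided.
	 */
	spin_lock(&vmap_area_lock);
	if (!__this_cpu_read(ne_fit_preload_node)) {
		/* GFP_KERNEL may sleep, so drop the lock around it. */
		spin_unlock(&vmap_area_lock);
		pva = kmem_cache_alloc_node(vmap_area_cachep,
					    GFP_KERNEL, node);
		spin_lock(&vmap_area_lock);

		/*
		 * We may have migrated to another CPU while the lock
		 * was dropped, or another task may have preloaded this
		 * CPU already; keep at most one cached object and free
		 * the spare.
		 */
		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
			if (pva)
				kmem_cache_free(vmap_area_cachep, pva);
		}
	}

Note that __this_cpu_cmpxchg() returns the old per-CPU value, so a
non-NULL return means the preload slot was already filled and the
freshly allocated object is surplus.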