On 2019-10-11 00:33:18 [+0200], Uladzislau Rezki (Sony) wrote:
> Get rid of preempt_disable() and preempt_enable() when the
> preload is done for splitting purpose. The reason is that
> calling spin_lock() with disabled preemption is forbidden in
> a CONFIG_PREEMPT_RT kernel.
>
> Therefore, we no longer guarantee that a CPU is preloaded;
> instead, with this change we minimize the cases when it is not.
>
> For example, I ran a special test case that follows the preload
> pattern and path. 20 "unbind" threads run it and each does
> 1000000 allocations. Only 3.5 times per 1000000 allocations was
> a CPU not preloaded. So it can happen, but the number is negligible.
>
> V1 -> V2:
>   - move the __this_cpu_cmpxchg check to after the spin_lock is
>     taken, as proposed by Andrew Morton
>   - add more explanation with regard to preloading
>   - adjust and move some comments
>
> Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
> Reviewed-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>

Acked-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

Thank you.

Sebastian
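For readers following along: the V2 scheme quoted above (allocate the preload object with preemption enabled, then publish it via a cmpxchg-style check only once the lock is held) can be sketched in userspace roughly as below. This is a simplified, single-threaded illustration under stated assumptions: preload_slot, lock_held, and preload_and_lock() are hypothetical stand-ins, not the kernel's identifiers (in mm/vmalloc.c the slot is the per-CPU ne_fit_preload_node and the lock is the vmalloc spinlock), and the real code uses __this_cpu_cmpxchg() rather than a plain NULL check.

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative stand-ins for the kernel's per-CPU preload slot and
 * spinlock; not the actual vmalloc identifiers. */
static void *preload_slot;  /* per-CPU preload object (NULL = not preloaded) */
static int lock_held;       /* models whether the spinlock is currently taken */

static void preload_and_lock(void)
{
    /*
     * Allocate outside the lock. With preemption left enabled (the
     * point of the patch), the thread may migrate to another CPU
     * here, so preloading becomes best-effort rather than guaranteed.
     */
    void *obj = malloc(16);

    lock_held = 1;  /* spin_lock(...) in the kernel */

    /*
     * Publish the object only if the slot is still empty. In the
     * kernel this is the __this_cpu_cmpxchg() check, moved to after
     * the lock is taken (the V1 -> V2 change).
     */
    if (preload_slot == NULL)
        preload_slot = obj;
    else
        free(obj);  /* slot was filled meanwhile; drop the extra object */
}
```

The design point the sketch captures: because the lock itself serializes users of the slot, the cmpxchg-under-lock check makes the extra allocation harmless (it is simply freed), which is what lets the hard preempt_disable() guarantee be dropped on PREEMPT_RT.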