On Thu, Jul 16, 2020 at 09:36:47AM -0400, Joel Fernandes wrote:
> On Thu, Jul 16, 2020 at 11:19:13AM +0200, Uladzislau Rezki wrote:
> > On Wed, Jul 15, 2020 at 07:13:33PM -0400, Joel Fernandes wrote:
> > > On Wed, Jul 15, 2020 at 2:56 PM Sebastian Andrzej Siewior
> > > <bigeasy@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On 2020-07-15 20:35:37 [+0200], Uladzislau Rezki (Sony) wrote:
> > > > > @@ -3306,6 +3307,9 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
> > > > > 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
> > > > > 		return false;
> > > > >
> > > > > +	preempt_disable();
> > > > > +	krc_this_cpu_unlock(*krcp, *flags);
> > > >
> > > > Now you enter the memory allocator with preemption disabled. This isn't
> > > > any better, but we don't have a warning for it yet.
> > > > What happened to the part where I asked for a spinlock_t?
> > >
> > > Ulad,
> > > Wouldn't replacing preempt_disable() with migrate_disable() above
> > > resolve Sebastian's issue?
> > >
> > This is for the regular kernel only. There, migrate_disable() is
> > equivalent to preempt_disable(), so it makes no difference.
>
> But this will force preempt_disable() context into the low-level page
> allocator on -RT kernels, which I believe is not what Sebastian wants. The
> whole reason the spinlock vs. raw-spinlock ordering matters is that, on
> RT, the spinlock is sleeping. So if you have:
>
> raw_spin_lock(..);
> spin_lock(..); <-- can sleep on RT, so a sleep-while-atomic (SWA) violation.
>
> That's the main reason you are dropping the lock before calling the
> allocator.
>
No. Please read the commit message of this patch. This is for the regular
kernel only. Your own patch already does:

<snip>
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		return false;
<snip>

--
Vlad Rezki
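
For illustration, a minimal sketch of the SWA ordering problem Joel describes
above. The lock name and helper function here are hypothetical, invented for
the example; only the lock primitives, __get_free_page(), and the PREEMPT_RT
semantics of spinlock_t are taken from the kernel:

<snip>
#include <linux/spinlock.h>
#include <linux/gfp.h>

/* Hypothetical lock, purely for illustration. */
static DEFINE_RAW_SPINLOCK(example_raw_lock);

/* Hypothetical helper showing the problematic nesting. */
static void *alloc_under_raw_lock(void)
{
	unsigned long page;

	raw_spin_lock(&example_raw_lock); /* preemption off, even on RT */
	/*
	 * The page allocator internally takes spinlock_t locks
	 * (e.g. zone->lock). On PREEMPT_RT a spinlock_t is a
	 * sleeping lock, so acquiring one while holding a
	 * raw_spinlock_t is a sleep-while-atomic (SWA) violation.
	 */
	page = __get_free_page(GFP_NOWAIT);
	raw_spin_unlock(&example_raw_lock);

	return (void *)page;
}
<snip>

On a regular (non-RT) kernel both lock types spin, so this nesting is not a
correctness problem; that is why the IS_ENABLED(CONFIG_PREEMPT_RT) bail-out
quoted above restricts the unlock-then-allocate path to the regular kernel.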