On Fri, Jun 28, 2019 at 04:31:38PM +0900, Byungchul Park wrote:
> On Thu, Jun 27, 2019 at 01:36:12PM -0700, Paul E. McKenney wrote:
> > On Thu, Jun 27, 2019 at 03:17:27PM -0500, Scott Wood wrote:
> > > On Thu, 2019-06-27 at 11:41 -0700, Paul E. McKenney wrote:
> > > > On Thu, Jun 27, 2019 at 02:16:38PM -0400, Joel Fernandes wrote:
> > > > > I think the fix should be to prevent the wake-up not based on
> > > > > whether we are in hard/soft-interrupt mode, but on whether we are
> > > > > doing the rcu_read_unlock() from a scheduler path (if we can
> > > > > detect that).
> > > >
> > > > Or just don't do the wakeup at all, if it comes to that.  I don't know
> > > > of any way to determine whether rcu_read_unlock() is being called from
> > > > the scheduler, but it has been some time since I asked Peter Zijlstra
> > > > about that.
> > > >
> > > > Of course, unconditionally refusing to do the wakeup might not be a
> > > > happy thing for NO_HZ_FULL kernels that don't implement IRQ work.
> > >
> > > Couldn't smp_send_reschedule() be used instead?
> >
> > Good point.  If current -rcu doesn't fix things for Sebastian's case,
> > that would be well worth looking at.  But there must be some reason
> > why Peter Zijlstra didn't suggest it when he instead suggested using
> > the IRQ work approach.
> >
> > Peter, thoughts?
>
> +cc kernel-team@xxxxxxx
> (I'm sorry for more noise on the thread.)
>
> Hello,
>
> Isn't the following scenario possible?
>
> The original code
> -----------------
>    rcu_read_lock();
>    ...
>    /* Expedite */
>    WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, true);
>    ...
>    __rcu_read_unlock();
>       if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
>          rcu_read_unlock_special(t);
>             WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, false);
>             rcu_preempt_deferred_qs_irqrestore(t, flags);
>       barrier(); /* ->rcu_read_unlock_special load before assign */
>       t->rcu_read_lock_nesting = 0;
>
> The reordered code by machine
> -----------------------------
>    rcu_read_lock();
>    ...
>    /* Expedite */
>    WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, true);
>    ...
>    __rcu_read_unlock();
>       if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
>          rcu_read_unlock_special(t);
>       t->rcu_read_lock_nesting = 0;                 <--- LOOK AT THIS!!!
>             WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, false);
>             rcu_preempt_deferred_qs_irqrestore(t, flags);
>       barrier(); /* ->rcu_read_unlock_special load before assign */
>
> An interrupt happens
> --------------------
>    rcu_read_lock();
>    ...
>    /* Expedite */
>    WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, true);
>    ...
>    __rcu_read_unlock();
>       if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
>          rcu_read_unlock_special(t);
>       t->rcu_read_lock_nesting = 0;                 <--- LOOK AT THIS!!!
>       <--- Handle an (any) irq
>          rcu_read_lock();
>          /* This call should be skipped */
>          rcu_read_unlock_special(t);
>             WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, false);
>             rcu_preempt_deferred_qs_irqrestore(t, flags);
>       barrier(); /* ->rcu_read_unlock_special load before assign */
>
> We should not end up handling the special work twice like this; that is
> one cause of the problem, and the other, of course, is calling ttwu
> without being aware that we are in a context already holding a pi lock.
>
> Apart from the discussion about how to avoid calling ttwu in an improper
> context, I think the following is necessary.  I may be missing
> something, so I would appreciate it if you let me know in case I am
> wrong.
>
> Anyway, logically I think we should prevent reordering between
> t->rcu_read_lock_nesting and t->rcu_read_unlock_special.b.exp_hint not
> only by the compiler but also by the CPU, as below.
>
> Am I missing something?
>
> Thanks,
> Byungchul
>
> ---8<---
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 3c8444e..9b137f1 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -412,7 +412,13 @@ void __rcu_read_unlock(void)
>  		barrier();  /* assign before ->rcu_read_unlock_special load */
>  		if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
>  			rcu_read_unlock_special(t);
> -		barrier();  /* ->rcu_read_unlock_special load before assign */
> +		/*
> +		 * Prevent reordering between clearing
> +		 * t->rcu_read_unlock_special in
> +		 * rcu_read_unlock_special() and the following
> +		 * assignment to t->rcu_read_lock_nesting.
> +		 */
> +		smp_wmb();
>  		t->rcu_read_lock_nesting = 0;
>  	}
>  	if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
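
For readers following along, the ordering concern in the quoted scenario can
be modeled as a small stand-alone program.  This is only an illustrative
sketch, not kernel code: the interrupt is modeled as a concurrent observer
thread (which over-approximates an interrupt arriving on the same CPU), and
the exp_hint/nesting variables are plain C11 atomics standing in for the
task_struct fields discussed above.

/*
 * Stand-alone sketch of the ordering concern above -- NOT kernel code.
 * Build with: cc -pthread model.c
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int exp_hint;       /* ~ t->rcu_read_unlock_special.b.exp_hint */
static atomic_int nesting = 1;    /* ~ t->rcu_read_lock_nesting, reader active */

static void *task_context(void *arg)
{
	/* Expedited grace period marks the running reader. */
	atomic_store_explicit(&exp_hint, 1, memory_order_relaxed);

	/* __rcu_read_unlock(): the special-case work clears the hint... */
	atomic_store_explicit(&exp_hint, 0, memory_order_relaxed);

	/*
	 * ...and only then may the nesting count be seen as 0.  The release
	 * store plays the role of the smp_wmb() + plain store in the patch;
	 * if this were memory_order_relaxed, the memory model would allow
	 * the observer below to see nesting == 0 while exp_hint is still 1
	 * (even though strongly ordered hardware may never exhibit it).
	 */
	atomic_store_explicit(&nesting, 0, memory_order_release);
	return NULL;
}

static void *irq_like_observer(void *arg)
{
	/* ~ an interrupt handler doing its own rcu_read_lock()/unlock(). */
	if (atomic_load_explicit(&nesting, memory_order_acquire) == 0 &&
	    atomic_load_explicit(&exp_hint, memory_order_relaxed) == 1)
		printf("stale exp_hint: rcu_read_unlock_special() would run again\n");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, task_context, NULL);
	pthread_create(&b, NULL, irq_like_observer, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}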
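
The distinction the patch relies on can also be shown in isolation:
barrier() only constrains the compiler, whereas smp_wmb() additionally
orders the stores as seen by other observers on weakly ordered
architectures.  A rough userspace analogue, again just a sketch: the empty
asm with a "memory" clobber stands in for barrier(), and the C11 release
fence stands in for smp_wmb().

#include <stdatomic.h>

static atomic_int exp_hint_cleared;
static atomic_int nesting = 1;

/* ~ old code: a compiler-only barrier between the two stores. */
static void unlock_tail_compiler_barrier(void)
{
	atomic_store_explicit(&exp_hint_cleared, 1, memory_order_relaxed);
	asm volatile("" ::: "memory");               /* ~ barrier() */
	atomic_store_explicit(&nesting, 0, memory_order_relaxed);
}

/* ~ patched code: a write fence also orders the stores for other observers. */
static void unlock_tail_write_fence(void)
{
	atomic_store_explicit(&exp_hint_cleared, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);   /* ~ smp_wmb() */
	atomic_store_explicit(&nesting, 0, memory_order_relaxed);
}

int main(void)
{
	unlock_tail_compiler_barrier();
	unlock_tail_write_fence();
	return 0;
}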