On Thu, Apr 29, 2021 at 02:17:58PM -0700, Michel Lespinasse wrote:
> On Thu, Apr 29, 2021 at 11:34:12AM -0700, Paul E. McKenney wrote:
> > ------------------------------------------------------------------------
> >
> > commit 97262c64c2cf807bf06825e454c4bedd228fadfb
> > Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
> > Date:   Thu Apr 29 11:18:01 2021 -0700
> >
> >     rcu: Improve comments describing RCU read-side critical sections
> >
> >     There are a number of places that call out the fact that preempt-disable
> >     regions of code now act as RCU read-side critical sections, where
> >     preempt-disable regions of code include irq-disable regions of code,
> >     bh-disable regions of code, hardirq handlers, and NMI handlers. However,
> >     someone relying solely on (for example) the call_rcu() header comment
> >     might well have no idea that preempt-disable regions of code have RCU
> >     semantics.
> >
> >     This commit therefore updates the header comments for
> >     call_rcu(), synchronize_rcu(), rcu_dereference_bh_check(), and
> >     rcu_dereference_sched_check() to call out these new(ish) forms of RCU
> >     readers.
> >
> >     Reported-by: Michel Lespinasse <michel@xxxxxxxxxxxxxx>
> >     Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> >
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index a10480f2b4ef..c01b04ad64c4 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -532,7 +532,10 @@ do { \
> >   * @p: The pointer to read, prior to dereferencing
> >   * @c: The conditions under which the dereference will take place
> >   *
> > - * This is the RCU-bh counterpart to rcu_dereference_check().
> > + * This is the RCU-bh counterpart to rcu_dereference_check(). However,
> > + * please note that in recent kernels, synchronize_rcu() waits for
> > + * local_bh_disable() regions of code in addition to regions of code
> > + * demarked by rcu_read_lock() and rcu_read_unlock().
>
> Two things:
> - "recent kernels" could be clarified, as Matthew pointed out
> - The above is not 100% clear if call_rcu() also waits for
>   local_bh_disable() regions of code ? (you did clarify this in tree.c
>   but I think it's better to have that here as well)

Good points, I updated both.

> >   */
> >  #define rcu_dereference_bh_check(p, c) \
> >  	__rcu_dereference_check((p), (c) || rcu_read_lock_bh_held(), __rcu)
> > @@ -543,6 +546,9 @@ do { \
> >   * @c: The conditions under which the dereference will take place
> >   *
> >   * This is the RCU-sched counterpart to rcu_dereference_check().
> > + * However, please note that in recent kernels, synchronize_rcu() waits
> > + * for preemption-disabled regions of code in addition to regions of code
> > + * demarked by rcu_read_lock() and rcu_read_unlock().
>
> Same comments regarding "recent kernels" and call_rcu() here.

And here as well.

> >   */
> >  #define rcu_dereference_sched_check(p, c) \
> >  	__rcu_dereference_check((p), (c) || rcu_read_lock_sched_held(), \
> > @@ -634,6 +640,12 @@ do { \
> >   * sections, invocation of the corresponding RCU callback is deferred
> >   * until after the all the other CPUs exit their critical sections.
> >   *
> > + * In recent kernels, synchronize_rcu() and call_rcu() also wait for
> > + * regions of code with preemption disabled, including regions of code
> > + * with interrupts or softirqs disabled. If your kernel is old enough
> > + * for synchronize_sched() to be defined, only code enclosed within
> > + * rcu_read_lock() and rcu_read_unlock() are guaranteed to be waited for.
> > + *
>
> Thanks, this is the quote I was looking for, and also I think it's
> important for it to be in rcupdate.h rather than any .c implementation
> (I think it's more natural to look at headers for this kind of stuff).
>
> Same comment regarding "old enough" / "recent kernels" though.
>
> >   * Note, however, that RCU callbacks are permitted to run concurrently
> >   * with new RCU read-side critical sections. One way that this can happen
> >   * is via the following sequence of events: (1) CPU 0 enters an RCU
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>
> The tree.c changes look fine to me.

I added the version here also.

> Thanks a lot for looking into this !

And here is the updated commit.  Thoughts?

							Thanx, Paul

------------------------------------------------------------------------

commit cc5a0ad5aa52d26379d5cd04d0a8f0917caf7365
Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
Date:   Thu Apr 29 11:18:01 2021 -0700

    rcu: Improve comments describing RCU read-side critical sections
    
    There are a number of places that call out the fact that preempt-disable
    regions of code now act as RCU read-side critical sections, where
    preempt-disable regions of code include irq-disable regions of code,
    bh-disable regions of code, hardirq handlers, and NMI handlers. However,
    someone relying solely on (for example) the call_rcu() header comment
    might well have no idea that preempt-disable regions of code have RCU
    semantics.
    
    This commit therefore updates the header comments for
    call_rcu(), synchronize_rcu(), rcu_dereference_bh_check(), and
    rcu_dereference_sched_check() to call out these new(ish) forms of RCU
    readers.
    
    Reported-by: Michel Lespinasse <michel@xxxxxxxxxxxxxx>
    [ paulmck: Apply Matthew Wilcox and Michel Lespinasse feedback. ]
    Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index a10480f2b4ef..adc2043e92db 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -532,7 +532,12 @@ do { \
  * @p: The pointer to read, prior to dereferencing
  * @c: The conditions under which the dereference will take place
  *
- * This is the RCU-bh counterpart to rcu_dereference_check().
+ * This is the RCU-bh counterpart to rcu_dereference_check(). However,
+ * please note that starting in v5.0 kernels, vanilla RCU grace periods
+ * wait for local_bh_disable() regions of code in addition to regions of
+ * code demarked by rcu_read_lock() and rcu_read_unlock(). This means
+ * that synchronize_rcu(), call_rcu, and friends all take not only
+ * rcu_read_lock() but also rcu_read_lock_bh() into account.
  */
 #define rcu_dereference_bh_check(p, c) \
 	__rcu_dereference_check((p), (c) || rcu_read_lock_bh_held(), __rcu)
@@ -543,6 +548,11 @@ do { \
  * @c: The conditions under which the dereference will take place
  *
  * This is the RCU-sched counterpart to rcu_dereference_check().
+ * However, please note that starting in v5.0 kernels, vanilla RCU grace
+ * periods wait for preempt_disable() regions of code in addition to
+ * regions of code demarked by rcu_read_lock() and rcu_read_unlock().
+ * This means that synchronize_rcu(), call_rcu, and friends all take not
+ * only rcu_read_lock() but also rcu_read_lock_sched() into account.
  */
 #define rcu_dereference_sched_check(p, c) \
 	__rcu_dereference_check((p), (c) || rcu_read_lock_sched_held(), \
@@ -634,6 +644,12 @@ do { \
  * sections, invocation of the corresponding RCU callback is deferred
  * until after the all the other CPUs exit their critical sections.
  *
+ * In recent kernels, synchronize_rcu() and call_rcu() also wait for
+ * regions of code with preemption disabled, including regions of code
+ * with interrupts or softirqs disabled. If your kernel is old enough
+ * for synchronize_sched() to be defined, only code enclosed within
+ * rcu_read_lock() and rcu_read_unlock() are guaranteed to be waited for.
+ *
  * Note, however, that RCU callbacks are permitted to run concurrently
  * with new RCU read-side critical sections. One way that this can happen
  * is via the following sequence of events: (1) CPU 0 enters an RCU
@@ -728,9 +744,11 @@ static inline void rcu_read_unlock(void)
 /**
  * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
  *
- * This is equivalent of rcu_read_lock(), but also disables softirqs.
- * Note that anything else that disables softirqs can also serve as
- * an RCU read-side critical section.
+ * This is equivalent to rcu_read_lock(), but also disables softirqs.
+ * Note that anything else that disables softirqs can also serve as an RCU
+ * read-side critical section. However, please note that this equivalence
+ * applies only to v5.0 and later. Before v5.0, rcu_read_lock() and
+ * rcu_read_lock_bh() were unrelated.
  *
  * Note that rcu_read_lock_bh() and the matching rcu_read_unlock_bh()
  * must occur in the same context, for example, it is illegal to invoke
@@ -763,9 +781,12 @@ static inline void rcu_read_unlock_bh(void)
 /**
  * rcu_read_lock_sched() - mark the beginning of a RCU-sched critical section
  *
- * This is equivalent of rcu_read_lock(), but disables preemption.
- * Read-side critical sections can also be introduced by anything else
- * that disables preemption, including local_irq_disable() and friends.
+ * This is equivalent to rcu_read_lock(), but also disables preemption.
+ * Read-side critical sections can also be introduced by anything else that
+ * disables preemption, including local_irq_disable() and friends. However,
+ * please note that the equivalence to rcu_read_lock() applies only to
+ * v5.0 and later. Before v5.0, rcu_read_lock() and rcu_read_lock_sched()
+ * were unrelated.
  *
  * Note that rcu_read_lock_sched() and the matching rcu_read_unlock_sched()
  * must occur in the same context, for example, it is illegal to invoke
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 9ea1d4eef1ad..9089c23e80dc 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3071,12 +3071,14 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func)
  * period elapses, in other words after all pre-existing RCU read-side
  * critical sections have completed. However, the callback function
  * might well execute concurrently with RCU read-side critical sections
- * that started after call_rcu() was invoked. RCU read-side critical
- * sections are delimited by rcu_read_lock() and rcu_read_unlock(), and
- * may be nested. In addition, regions of code across which interrupts,
- * preemption, or softirqs have been disabled also serve as RCU read-side
- * critical sections. This includes hardware interrupt handlers, softirq
- * handlers, and NMI handlers.
+ * that started after call_rcu() was invoked.
+ *
+ * RCU read-side critical sections are delimited by rcu_read_lock()
+ * and rcu_read_unlock(), and may be nested. In addition, but only in
+ * v5.0 and later, regions of code across which interrupts, preemption,
+ * or softirqs have been disabled also serve as RCU read-side critical
+ * sections. This includes hardware interrupt handlers, softirq handlers,
+ * and NMI handlers.
  *
  * Note that all CPUs must agree that the grace period extended beyond
  * all pre-existing RCU read-side critical section. On systems with more
@@ -3771,10 +3773,12 @@ static int rcu_blocking_is_gp(void)
  * read-side critical sections have completed. Note, however, that
  * upon return from synchronize_rcu(), the caller might well be executing
  * concurrently with new RCU read-side critical sections that began while
- * synchronize_rcu() was waiting. RCU read-side critical sections are
- * delimited by rcu_read_lock() and rcu_read_unlock(), and may be nested.
- * In addition, regions of code across which interrupts, preemption, or
- * softirqs have been disabled also serve as RCU read-side critical
+ * synchronize_rcu() was waiting.
+ *
+ * RCU read-side critical sections are delimited by rcu_read_lock()
+ * and rcu_read_unlock(), and may be nested. In addition, but only in
+ * v5.0 and later, regions of code across which interrupts, preemption,
+ * or softirqs have been disabled also serve as RCU read-side critical
 * sections. This includes hardware interrupt handlers, softirq handlers,
 * and NMI handlers.
 *
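[ Editor's note: the sketch below is not part of the patch or the thread;
it is a hypothetical kernel-code illustration of the semantics the updated
comments describe. The struct, variable, and function names (foo, gp,
reader, updater) are invented for the example. It shows a reader that
relies only on preempt_disable(), which vanilla synchronize_rcu() waits
for in v5.0 and later, exactly the point the comments now document; on
pre-v5.0 kernels, only synchronize_sched() would have waited for it. ]

```c
#include <linux/preempt.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int val;
};

/* Hypothetical RCU-protected pointer. */
static struct foo __rcu *gp;

/* Reader: a preempt-disable region with no rcu_read_lock() in sight. */
static int reader(void)
{
	struct foo *p;
	int val = -1;

	preempt_disable();	/* Acts as an RCU reader in v5.0+. */
	p = rcu_dereference_sched_check(gp, 1);
	if (p)
		val = p->val;
	preempt_enable();
	return val;
}

/* Updater: vanilla synchronize_rcu() now waits for the reader above. */
static void updater(struct foo *newp)
{
	struct foo *oldp;

	oldp = rcu_replace_pointer(gp, newp, 1);
	synchronize_rcu();	/* Waits for preempt-disable regions too. */
	kfree(oldp);
}
```

The same reasoning applies to local_bh_disable()/local_irq_disable()
regions, hardirq handlers, and NMI handlers, per the updated call_rcu()
and synchronize_rcu() header comments quoted above.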