On Thu, Feb 25, 2021 at 03:20:34PM -0500, Mathieu Desnoyers wrote:
> ----- On Feb 25, 2021, at 1:33 PM, paulmck paulmck@xxxxxxxxxx wrote:
> 
> [...]
> 
> > commit 581f79546b6be406a9c7280b2d3511b60821efe0
> > Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
> > Date:   Thu Feb 25 10:26:00 2021 -0800
> > 
> >     rcu-tasks: Add block comment laying out RCU Tasks Trace design
> > 
> >     This commit adds a block comment that gives a high-level overview of
> >     how RCU tasks trace grace periods progress.  It also adds a note about
> >     how exiting tasks are handles, plus it gives an overview of the memory
> 
> handles -> handled

Good eyes, fixed!

> >     ordering.
> > 
> >     Reported-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> >     Reported-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
> >     Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> > 
> > diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> > index 17c8ebe..f818357 100644
> > --- a/kernel/rcu/tasks.h
> > +++ b/kernel/rcu/tasks.h
> > @@ -726,6 +726,42 @@ EXPORT_SYMBOL_GPL(show_rcu_tasks_rude_gp_kthread);
> >  // flavors, rcu_preempt and rcu_sched.  The fact that RCU Tasks Trace
> >  // readers can operate from idle, offline, and exception entry/exit in no
> >  // way allows rcu_preempt and rcu_sched readers to also do so.
> > +//
> > +// The implementation uses rcu_tasks_wait_gp(), which relies on function
> > +// pointers in the rcu_tasks structure.  The rcu_spawn_tasks_trace_kthread()
> > +// function sets these function pointers up so that rcu_tasks_wait_gp()
> > +// invokes these functions in this order:
> > +//
> > +// rcu_tasks_trace_pregp_step():
> > +//	Initialize the count of readers and block CPU-hotplug operations.
> > +// rcu_tasks_trace_pertask(), invoked on every non-idle task:
> > +//	Initialize per-task state and attempt to identify an immediate
> > +//	quiescent state for that task, or, failing that, attempt to set
> > +//	that task's .need_qs flag so that that task's next outermost
> > +//	rcu_read_unlock_trace() will report the quiescent state (in which
> > +//	case the count of readers is incremented).  If both attempts fail,
> > +//	the task is added to a "holdout" list.
> > +// rcu_tasks_trace_postscan():
> > +//	Initialize state and attempt to identify an immediate quiescent
> > +//	state as above (but only for idle tasks), unblock CPU-hotplug
> > +//	operations, and wait for an RCU grace period to avoid races with
> > +//	tasks that are in the process of exiting.
> > +// check_all_holdout_tasks_trace(), repeatedly until holdout list is empty:
> > +//	Scans the holdout list, attempting to identify a quiescent state
> > +//	for each task on the list.  If there is a quiescent state, the
> > +//	corresponding task is removed from the holdout list.
> > +// rcu_tasks_trace_postgp():
> > +//	Wait for the count of readers do drop to zero, reporting any stalls.
> > +//	Also execute full memory barriers to maintain ordering with code
> > +//	executing after the grace period.
> > +//
> > +// The exit_tasks_rcu_finish_trace() synchronizes with exiting tasks.
> > +//
> > +// Pre-grace-period update-side code is ordered before the grace
> > +// period via the ->cbs_lock and barriers in rcu_tasks_kthread().
> > +// Pre-grace-period read-side code is ordered before the grace period by
> > +// atomic_dec_and_test() of the count of readers (for IPIed readers) and by
> > +// scheduler context-switch ordering (for locked-down non-running readers).
> 
> The rest looks good, thanks!

Thank you for looking it over!

							Thanx, Paul
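
[Editor's illustrative sketch, not part of the original thread: the block
comment above describes rcu_tasks_wait_gp() driving a fixed sequence of
phases through function pointers in the rcu_tasks structure.  The
user-space model below only shows that ordering; the "model_" struct,
the stub phase functions, and their bodies are hypothetical stand-ins
that merely mirror the kernel identifiers quoted in the patch, not the
kernel's actual definitions.]

/*
 * Minimal user-space model of the phase ordering described in the
 * quoted block comment.  Each field mirrors one of the kernel's
 * function pointers; the stubs just print the phase they stand for.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_rcu_tasks {
	void (*pregp_func)(void);	/* mirrors rcu_tasks_trace_pregp_step() */
	void (*pertask_func)(int task);	/* mirrors rcu_tasks_trace_pertask() */
	void (*postscan_func)(void);	/* mirrors rcu_tasks_trace_postscan() */
	bool (*holdouts_func)(void);	/* mirrors check_all_holdout_tasks_trace() */
	void (*postgp_func)(void);	/* mirrors rcu_tasks_trace_postgp() */
};

/* Stub phases: report what the real code would do at each step. */
static int nscans = 2;
static void pregp(void)    { printf("pregp: init reader count, block CPU hotplug\n"); }
static void pertask(int t) { printf("pertask: check task %d for a quiescent state\n", t); }
static void postscan(void) { printf("postscan: idle tasks, unblock hotplug, wait for RCU GP\n"); }
static bool holdouts(void) { printf("holdouts: rescan holdout list\n"); return --nscans == 0; }
static void postgp(void)   { printf("postgp: wait for reader count to reach zero\n"); }

/* Model of rcu_tasks_wait_gp(): invoke the phases in the documented order. */
static void model_wait_gp(struct model_rcu_tasks *rtp, int ntasks)
{
	int t;

	rtp->pregp_func();			/* phase 1 */
	for (t = 0; t < ntasks; t++)
		rtp->pertask_func(t);		/* phase 2, once per non-idle task */
	rtp->postscan_func();			/* phase 3 */
	while (!rtp->holdouts_func())
		;				/* phase 4, until the holdout list empties */
	rtp->postgp_func();			/* phase 5 */
}

int main(void)
{
	struct model_rcu_tasks rtp = {
		.pregp_func    = pregp,
		.pertask_func  = pertask,
		.postscan_func = postscan,
		.holdouts_func = holdouts,
		.postgp_func   = postgp,
	};

	model_wait_gp(&rtp, 3);
	return 0;
}

Running the model prints the five phases in order, with the holdout
rescan repeating until the (simulated) holdout list empties, which is
the control flow the new block comment documents.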