On Sat, Oct 28, 2023 at 12:46:28AM +0200, Peter Zijlstra wrote:
> On Fri, Oct 27, 2023 at 02:23:56PM -0700, Paul E. McKenney wrote:
> > On Fri, Oct 27, 2023 at 09:20:26PM +0200, Peter Zijlstra wrote:
> > > On Fri, Oct 27, 2023 at 04:40:48PM +0200, Frederic Weisbecker wrote:
> > > 
> > > > +	/* Has the task been seen voluntarily sleeping? */
> > > > +	if (!READ_ONCE(t->on_rq))
> > > > +		return false;
> > > 
> > > > -	if (t != current && READ_ONCE(t->on_rq) && !is_idle_task(t)) {
> > > 
> > > AFAICT this ->on_rq usage is outside of scheduler locks and that
> > > READ_ONCE isn't going to help much.
> > > 
> > > Obviously a pre-existing issue, and I suppose all it cares about is
> > > seeing a 0 or not, irrespective of the races, but urgh..
> > 
> > The trick is that RCU Tasks only needs to spot a task voluntarily blocked
> > once at any point in the grace period.  The beginning and end of the
> > grace-period process have full barriers, so if this code sees t->on_rq
> > equal to zero, we know that the task was voluntarily blocked at some
> > point during the grace period, as required.
> > 
> > In theory, we could acquire a scheduler lock, but in practice this would
> > cause CPU-latency problems at a certain set of large datacenters, and
> > for once, not the datacenters operated by my employer.
> > 
> > In theory, we could make separate lists of tasks that we need to wait on,
> > thus avoiding the need to scan the full task list, but in practice this
> > would require a synchronized linked-list operation on every voluntary
> > context switch, both in and out.
> > 
> > In theory, the task list could be sharded, so that it could be scanned
> > incrementally, but in practice, this is a bit non-trivial.  Though this
> > particular use case doesn't care about new tasks, so it could live with
> > something simpler than would be required for certain types of signal
> > delivery.
> > 
> > In theory, we could place rcu_segcblist-like mid pointers into the
> > task list, so that scans could restart from any mid pointer.  Care is
> > required because the mid pointers would likely need to be recycled as
> > new tasks are added.  Plus care is needed because it has been a good
> > long time since I have looked at the code managing the tasks list,
> > and I am probably woefully out of date on how it all works.
> > 
> > So, is there a better way?
> 
> Nah, this is more or less what I feared.  I just worry people will come
> around and put WRITE_ONCE() on the other end.  I don't think that'll buy
> us much.  Nor do I think the current READ_ONCE()s actually matter.

My friend, you trust compilers more than I ever will.  ;-)

> But perhaps put a comment there, that we don't care for the races and
> only need to observe a 0 once or something.
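Something like the following at that read, perhaps?  (Just a sketch: the
two code lines are Frederic's hunk from above, and the comment wording
is merely illustrative.)

	/*
	 * Lockless read, and any race is harmless:  All that is needed
	 * is to observe ->on_rq equal to zero at least once at some
	 * point in the grace period.  The full memory barriers at the
	 * beginning and end of the grace period then guarantee that a
	 * zero observed here means that this task voluntarily blocked
	 * during that grace period.
	 */
	if (!READ_ONCE(t->on_rq))
		return false;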
There are these two passages in the big block comment preceding the
RCU Tasks code:

// rcu_tasks_pregp_step():
//	Invokes synchronize_rcu() in order to wait for all in-flight
//	t->on_rq and t->nvcsw transitions to complete.  This works because
//	all such transitions are carried out with interrupts disabled.

and:

// rcu_tasks_postgp():
//	Invokes synchronize_rcu() in order to ensure that all prior
//	t->on_rq and t->nvcsw transitions are seen by all CPUs and tasks
//	to have happened before the end of this RCU Tasks grace period.
//	Again, this works because all such transitions are carried out
//	with interrupts disabled.

The rcu_tasks_pregp_step() function contains this comment:

	/*
	 * Wait for all pre-existing t->on_rq and t->nvcsw transitions
	 * to complete.  Invoking synchronize_rcu() suffices because all
	 * these transitions occur with interrupts disabled.  Without this
	 * synchronize_rcu(), a read-side critical section that started
	 * before the grace period might be incorrectly seen as having
	 * started after the grace period.
	 *
	 * This synchronize_rcu() also dispenses with the need for a
	 * memory barrier on the first store to t->rcu_tasks_holdout,
	 * as it forces the store to happen after the beginning of the
	 * grace period.
	 */

And the rcu_tasks_postgp() function contains this comment:

	/*
	 * Because ->on_rq and ->nvcsw are not guaranteed to have a full
	 * memory barriers prior to them in the schedule() path, memory
	 * reordering on other CPUs could cause their RCU-tasks read-side
	 * critical sections to extend past the end of the grace period.
	 * However, because these ->nvcsw updates are carried out with
	 * interrupts disabled, we can use synchronize_rcu() to force the
	 * needed ordering on all such CPUs.
	 *
	 * This synchronize_rcu() also confines all ->rcu_tasks_holdout
	 * accesses to be within the grace period, avoiding the need for
	 * memory barriers for ->rcu_tasks_holdout accesses.
	 *
	 * In addition, this synchronize_rcu() waits for exiting tasks
	 * to complete their final preempt_disable() region of execution,
	 * cleaning up after synchronize_srcu(&tasks_rcu_exit_srcu),
	 * enforcing the whole region before tasklist removal until
	 * the final schedule() with TASK_DEAD state to be an RCU TASKS
	 * read side critical section.
	 */

Does that suffice, or should we add more?

							Thanx, Paul