On Fri, Mar 13, 2020 at 03:41:46PM +0100, Frederic Weisbecker wrote:
> On Thu, Mar 12, 2020 at 11:16:18AM -0700, Paul E. McKenney wrote:
> > Hello!
> >
> > This series provides two variants of Tasks RCU, a rude variant inspired
> > by Steven Rostedt's use of schedule_on_each_cpu(), and a tracing variant
> > requested by the BPF folks and perhaps also of use for other tracing
> > use cases.
> >
> > The tracing variant has explicit read-side markers to permit finite grace
> > periods even given in-kernel loops in PREEMPT=n builds.  It also protects
> > code in the idle loop, on exception entry/exit paths, and on the various
> > CPU-hotplug online/offline code paths, thus having protection properties
> > similar to SRCU.  However, unlike SRCU, this variant avoids expensive
> > instructions in the read-side primitives, thus having read-side overhead
> > similar to that of preemptible RCU.
> >
> > There are of course downsides.  The grace-period code can send IPIs to
> > CPUs, even when those CPUs are in the idle loop or in nohz_full userspace.
> > It is necessary to scan the full tasklist, much as for Tasks RCU.  There
> > is a single callback queue guarded by a single lock, again, much as for
> > Tasks RCU.  If needed, these downsides can be at least partially remedied
>
> So what we gain in fixing the extended-grace-period issues we are having
> with tracing, we lose in CPU isolation.  That worries me a bit, as tracing
> can be heavily used together with nohz_full and CPU isolation.

First, disturbing nohz_full CPUs can be avoided by the sysadm simply
refusing to remove tracepoints while sensitive applications are running
on nohz_full CPUs.

Second, for non-CPU-bound real-time programs with mostly-idle CPUs,
I should be able to decrease the likelihood of sending IPIs pretty
much to zero.

Or am I missing something here?

							Thanx, Paul
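
------------------------------------------------------------------------

For illustration only, a minimal usage sketch of the proposed tracing
variant.  The rcu_read_lock_trace(), rcu_read_unlock_trace(),
rcu_read_lock_trace_held(), and synchronize_rcu_tasks_trace() APIs are
the ones this series introduces; the trace_hook structure and the two
functions wrapping those APIs are made up for the example.

#include <linux/rcupdate.h>
#include <linux/rcupdate_trace.h>	/* rcu_read_lock_trace() and friends. */
#include <linux/slab.h>

/* Illustrative structure, not part of the series itself. */
struct trace_hook {
	void (*func)(unsigned long arg);
	unsigned long arg;
};

static struct trace_hook __rcu *active_hook;

/* Reader: cheap markers, safe from idle and exception entry/exit code. */
static void invoke_active_hook(void)
{
	struct trace_hook *th;

	rcu_read_lock_trace();
	th = rcu_dereference_check(active_hook, rcu_read_lock_trace_held());
	if (th)
		th->func(th->arg);
	rcu_read_unlock_trace();
}

/* Updater: assumes callers serialize updates among themselves. */
static void replace_active_hook(struct trace_hook *newth)
{
	struct trace_hook *oldth;

	oldth = rcu_dereference_protected(active_hook, 1);
	rcu_assign_pointer(active_hook, newth);
	synchronize_rcu_tasks_trace();	/* Wait for all tracing readers. */
	kfree(oldth);
}

The point being that the read side is just a pair of cheap markers,
while the update side pays for the grace period, including the possible
IPIs discussed above.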