Paul E. McKenney <paulmck@xxxxxxxxxx> writes:

> On Wed, Oct 18, 2023 at 03:16:12PM +0200, Thomas Gleixner wrote:
>> Paul!
>>
>> On Tue, Oct 17 2023 at 18:03, Paul E. McKenney wrote:
>> > Belatedly calling out some RCU issues. Nothing fatal, just a
>> > (surprisingly) few adjustments that will need to be made. The key thing
>> > to note is that from RCU's viewpoint, with this change, all kernels
>> > are preemptible, though rcu_read_lock() readers remain
>> > non-preemptible.
>>
>> Why? Either I'm confused or you or both of us :)
>
> Isn't rcu_read_lock() defined as preempt_disable() and rcu_read_unlock()
> as preempt_enable() in this approach? I certainly hope so, as RCU
> priority boosting would be a most unwelcome addition to many datacenter
> workloads.

No. In this approach, PREEMPT_AUTO selects PREEMPTION and thus PREEMPT_RCU,
so rcu_read_lock/unlock() would manipulate rcu_read_lock_nesting, which is
identical to what PREEMPT_DYNAMIC does.

>> With this approach the kernel is by definition fully preemptible, which
>> means rcu_read_lock() is preemptible too. That's pretty much the
>> same situation as with PREEMPT_DYNAMIC.
>
> Please, just no!!!
>
> Please note that the current use of PREEMPT_DYNAMIC with preempt=none
> avoids preempting RCU read-side critical sections. This means that the
> distro use of PREEMPT_DYNAMIC has most definitely *not* tested preemption
> of RCU readers in environments expecting no preemption.

Ah. So, though PREEMPT_DYNAMIC with preempt=none runs with PREEMPT_RCU,
preempt=none stubs out the actual preemption via __preempt_schedule.
Okay, I see what you are saying.

(Side issue: this also means that even for PREEMPT_DYNAMIC with
preempt=none, _cond_resched() doesn't call rcu_all_qs().)

>> For throughput sake this fully preemptible kernel provides a mechanism
>> to delay preemption for SCHED_OTHER tasks, i.e. instead of setting
>> NEED_RESCHED the scheduler sets NEED_RESCHED_LAZY.
>>
>> That means the preemption points in preempt_enable() and return from
>> interrupt to kernel will not see NEED_RESCHED and the tasks can run to
>> completion either to the point where they call schedule() or when they
>> return to user space. That's pretty much what PREEMPT_NONE does today.
>>
>> The difference to NONE/VOLUNTARY is that the explicit cond_resched()
>> points are no longer required because the scheduler can preempt the
>> long running task by setting NEED_RESCHED instead.
>>
>> That preemption might be suboptimal in some cases compared to
>> cond_resched(), but from my initial experimentation that's not really an
>> issue.
>
> I am not (repeat NOT) arguing for keeping cond_resched(). I am instead
> arguing that the less-preemptible variants of the kernel should continue
> to avoid preempting RCU read-side critical sections.

[ snip ]

>> In the end there is no CONFIG_PREEMPT_XXX anymore. The only knob
>> remaining would be CONFIG_PREEMPT_RT, which should be renamed to
>> CONFIG_RT or such as it does not really change the preemption model
>> itself. RT just reduces the preemption-disabled sections with the lock
>> conversions, forced interrupt threading and some more.
>
> Again, please, no.
>
> There are situations where we still need rcu_read_lock() and
> rcu_read_unlock() to be preempt_disable() and preempt_enable(),
> respectively. Those can be cases selected only by Kconfig option, not
> available in kernels compiled with CONFIG_PREEMPT_DYNAMIC=y.
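Just to make sure I have the two reader flavours straight, here is my
(much simplified, possibly wrong in the details) understanding of what
the primitives boil down to today -- loosely following
include/linux/rcupdate.h and kernel/rcu/tree_plugin.h:

	/*
	 * !CONFIG_PREEMPT_RCU (e.g. PREEMPT_NONE=y with TREE_RCU):
	 * readers map directly onto the preempt count, so a reader can
	 * never be preempted.
	 */
	static inline void __rcu_read_lock(void)
	{
		preempt_disable();
	}

	static inline void __rcu_read_unlock(void)
	{
		preempt_enable();
	}

	/*
	 * CONFIG_PREEMPT_RCU (selected by PREEMPT_DYNAMIC, and by
	 * PREEMPT_AUTO in this series): readers only track their nesting
	 * depth in the task struct, so preemption inside a reader is
	 * legal and any resulting cleanup (deboosting, reporting a
	 * deferred quiescent state) is deferred to rcu_read_unlock().
	 */
	void __rcu_read_lock(void)
	{
		current->rcu_read_lock_nesting++;
		barrier();	/* critical section after entry code. */
	}

	void __rcu_read_unlock(void)
	{
		struct task_struct *t = current;

		barrier();	/* critical section before exit code. */
		if (--t->rcu_read_lock_nesting == 0 &&
		    unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
			rcu_read_unlock_special(t);
	}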
As far as non-preemptible RCU read-side critical sections are concerned,
are the current

 - PREEMPT_DYNAMIC=y, PREEMPT_RCU, preempt=none config
   (rcu_read_lock/unlock() do not manipulate preempt_count, and
   preempt=none stubs out preempt_schedule()), and

 - PREEMPT_NONE=y, TREE_RCU config (rcu_read_lock/unlock() manipulate
   preempt_count)

roughly similar or not?

>> > I am sure that I am missing something, but I have not yet seen any
>> > show-stoppers. Just some needed adjustments.
>>
>> Right. If it works out as I think it can work out the main adjustments
>> are to remove a large amount of #ifdef maze and related gunk :)
>
> Just please don't remove the #ifdef gunk that is still needed!

Always the hard part :).

Thanks

--
ankur