On Tue, Nov 21, 2023 at 04:38:34PM -0500, Steven Rostedt wrote:
> On Tue, 21 Nov 2023 13:14:16 -0800
> "Paul E. McKenney" <paulmck@xxxxxxxxxx> wrote:
> 
> > On Tue, Nov 21, 2023 at 09:30:49PM +0100, Peter Zijlstra wrote:
> > > On Tue, Nov 21, 2023 at 11:25:18AM -0800, Paul E. McKenney wrote:
> > > > #define preempt_enable() \
> > > > do { \
> > > > 	barrier(); \
> > > > 	if (!IS_ENABLED(CONFIG_PREEMPT_RCU) && raw_cpu_read(rcu_data.rcu_urgent_qs) && \
> > > > 	    ((preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK | HARDIRQ_MASK | NMI_MASK)) == PREEMPT_OFFSET) && \
> > > > 	    !irqs_disabled()) \
> 
> Could we make the above an else case of the below if ?

Wouldn't that cause the above preempt_count() test to always fail?

Another approach is to bury the test in preempt_count_dec_and_test(),
but I suspect that this would not make Peter any more happy than my
earlier suggestion.  ;-)

> > > > 		rcu_all_qs(); \
> > > > 	if (unlikely(preempt_count_dec_and_test())) { \
> > > > 		__preempt_schedule(); \
> > > > 	} \
> > > > } while (0)
> > > 
> > > Aaaaahhh, please no. We spend so much time reducing preempt_enable() to
> > > the minimal thing it is today, this will make it blow up into something
> > > giant again.
> 
> Note, the above is only true with "CONFIG_PREEMPT_RCU is not set", which
> keeps the preempt_enable() for preemptable kernels with PREEMPT_RCU
> still minimal.

Agreed, and there is probably some workload that does not like this.
After all, current CONFIG_PREEMPT_DYNAMIC=y booted with preempt=none
would have those cond_resched() invocations.

I was leery of checking dynamic information, but maybe sched_feat() is
faster than I am thinking?  (It should be with the static_branch, but
not sure about the other two access modes.)

							Thanx, Paul
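
A minimal sketch of the "bury the test in preempt_count_dec_and_test()"
alternative mentioned above, assuming a hypothetical wrapper name
(preempt_count_dec_and_test_rcu() is invented here; the masks and
helpers are the ones from the quoted macro):

/*
 * Hypothetical sketch, not from the thread: fold the urgent-QS check
 * into the decrement-and-test step.  Note that the check must run
 * *before* the decrement; afterwards preempt_count() would no longer
 * equal PREEMPT_OFFSET, which is why the "else case" placement above
 * would always fail.
 */
#define preempt_count_dec_and_test_rcu()				\
({									\
	if (!IS_ENABLED(CONFIG_PREEMPT_RCU) &&				\
	    raw_cpu_read(rcu_data.rcu_urgent_qs) &&			\
	    ((preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK |		\
				 HARDIRQ_MASK | NMI_MASK)) ==		\
	     PREEMPT_OFFSET) &&						\
	    !irqs_disabled())						\
		rcu_all_qs();						\
	preempt_count_dec_and_test();					\
})

preempt_enable() would then invoke this wrapper in place of
preempt_count_dec_and_test(), keeping the macro body itself unchanged.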
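
And a sketch of the static-key gating alluded to in the sched_feat()
question, with an invented key name (rcu_urgent_qs_key); with jump
labels, sched_feat() compiles down to a comparable static_branch test,
while its fallback access modes instead test bits in a feature mask:

/*
 * Hypothetical sketch, not from the thread: gate the extra check with
 * a static key so that kernels (or boot modes) that do not want it pay
 * only a patched no-op in the common case.  The key name is invented
 * for illustration; enabling and disabling it would be up to RCU or
 * the preempt= boot handling.
 */
DEFINE_STATIC_KEY_FALSE(rcu_urgent_qs_key);

#define preempt_enable()						\
do {									\
	barrier();							\
	if (static_branch_unlikely(&rcu_urgent_qs_key) &&		\
	    raw_cpu_read(rcu_data.rcu_urgent_qs) &&			\
	    ((preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK |		\
				 HARDIRQ_MASK | NMI_MASK)) ==		\
	     PREEMPT_OFFSET) &&						\
	    !irqs_disabled())						\
		rcu_all_qs();						\
	if (unlikely(preempt_count_dec_and_test()))			\
		__preempt_schedule();					\
} while (0)

With the key disabled (for example on PREEMPT_RCU kernels), the added
test should reduce to a single no-op jump, though the text growth at
every inlined preempt_enable() call site would remain.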