On Mon, Jan 30, 2023 at 01:40:18PM +0100, Peter Zijlstra wrote:
> On Fri, Jan 27, 2023 at 02:11:31PM -0800, Josh Poimboeuf wrote:
> > @@ -8500,8 +8502,10 @@ EXPORT_STATIC_CALL_TRAMP(might_resched);
> >  static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
> >  int __sched dynamic_cond_resched(void)
> >  {
> > -	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
> > +	if (!static_branch_unlikely(&sk_dynamic_cond_resched)) {
> > +		klp_sched_try_switch();
> >  		return 0;
> > +	}
> >  	return __cond_resched();
> >  }
> >  EXPORT_SYMBOL(dynamic_cond_resched);
> 
> I would make the klp_sched_try_switch() not depend on
> sk_dynamic_cond_resched, because __cond_resched() is not a guaranteed
> pass through __schedule().
> 
> But you'll probably want to check with Mark here, this all might
> generate crap code on arm64.

IIUC here klp_sched_try_switch() is a static call, so on arm64 this'll
generate at least a load, a conditional branch, and an indirect branch.
That's not ideal, but I'd have to benchmark it to find out whether it's
a significant overhead relative to the baseline of PREEMPT_DYNAMIC.

For arm64 it'd be a bit nicer to have another static key check, and a
call to __klp_sched_try_switch(). That way the static key check gets
turned into a NOP in the common case, and the call to
__klp_sched_try_switch() can be a direct call (potentially a tail-call
if we made it return 0).

Thanks,
Mark.
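
P.S. Roughly the shape I have in mind -- a sketch only; the key name
klp_sched_try_switch_key is invented here, and __klp_sched_try_switch()
would be the out-of-line livepatch helper:

	/* False by default; livepatch flips it only while a
	 * transition is pending, so the branch below is a NOP in
	 * the common case. */
	static DEFINE_STATIC_KEY_FALSE(klp_sched_try_switch_key);

	static __always_inline void klp_sched_try_switch(void)
	{
		if (static_branch_unlikely(&klp_sched_try_switch_key))
			__klp_sched_try_switch();
	}

With something like that, each call site costs a single NOP when no
transition is in progress, and enabling the key patches it into a
direct branch to __klp_sched_try_switch(), avoiding the load plus
indirect branch we'd get from a static call on arm64.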