> On May 7, 2022, at 11:26 AM, Rik van Riel <riel@xxxxxx> wrote:
> 
> On Sat, 2022-05-07 at 10:46 -0700, Song Liu wrote:
>> Busy kernel threads may block the transition of livepatch. Call
>> klp_try_switch_task from __cond_resched to make the transition
>> easier.
>> 
> That seems like a useful idea given what we're seeing on
> some systems, but I do have a nitpick with your patch :)
> 
>> +++ b/kernel/sched/core.c
>> @@ -6990,6 +6990,9 @@ SYSCALL_DEFINE0(sched_yield)
>>  #if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
>>  int __sched __cond_resched(void)
>>  {
>> +	if (unlikely(klp_patch_pending(current)))
>> +		klp_try_switch_task(current);
>> +
>>  	if (should_resched(0)) {
>>  		preempt_schedule_common();
>>  		return 1;
> 
> While should_resched and klp_patch_pending check the same
> cache line (task->flags), now there are two separate
> conditionals on this.
> 
> Would it make sense to fold the tests for TIF_NEED_RESCHED
> and TIF_PATCH_PENDING into should_resched(), and then re-do
> the test for TIF_PATCH_PENDING only if should_resched()
> returns true?

x86 has a different version of should_resched(), so I am not quite
sure what's the right way to modify should_resched(). OTOH, we can
probably keep should_resched() as-is and just move the
klp_patch_pending() check, like

int __sched __cond_resched(void)
{
	if (should_resched(0)) {
		if (unlikely(klp_patch_pending(current)))
			klp_try_switch_task(current);
		preempt_schedule_common();
		return 1;
	}
#ifndef CONFIG_PREEMPT_RCU
	rcu_all_qs();
#endif
	return 0;
}

Given live patch user space usually waits for many seconds, I guess
this should work?

Thanks,
Song
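
For reference, a minimal, untested sketch of what folding TIF_PATCH_PENDING
into the generic should_resched() (include/asm-generic/preempt.h) could look
like, along the lines Rik suggests above. This is only an illustration of the
idea, not part of the posted patch; the x86 percpu implementation of
should_resched() would need an equivalent change of its own.

/*
 * Untested sketch: also report "resched" when a livepatch transition is
 * pending, so __cond_resched() only re-tests TIF_PATCH_PENDING on the
 * slow path after should_resched() returns true (as in the snippet in
 * the mail above). TIF_PATCH_PENDING is only defined on architectures
 * with livepatch support.
 */
static __always_inline bool should_resched(int preempt_offset)
{
	return unlikely(preempt_count() == preempt_offset &&
			(tif_need_resched() ||
			 test_tsk_thread_flag(current, TIF_PATCH_PENDING)));
}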