On Wed, Oct 06, 2021 at 12:29:32PM +0200, Petr Mladek wrote:
> On Wed 2021-10-06 11:04:26, Peter Zijlstra wrote:
> > So it needs to be something like:
> >
> >
> > 	CPU0				CPU1
> >
> > 	<user>
> >
> > 					if (context_tracking_set_cpu_work(task_cpu(), CT_WORK_KLP))
> >
> > 	<kernel-entry>
> >
> > 	klp_update_patch_state()	klp_update_patch_state()
> >
> > So that CPU0 and CPU1 race to complete klp_update_patch_state() *before*
> > any regular (!noinstr) code gets run.
>
> Grr, you are right. I thought that we migrated the task when entering
> the kernel even before. But it seems that we do it only when leaving
> the kernel in exit_to_user_mode_loop().

Yep... :-)

> > Which then means it needs to look something like:
> >
> > noinstr void klp_update_patch_state(struct task_struct *task)
> > {
> > 	struct thread_info *ti = task_thread_info(task);
> >
> > 	preempt_disable_notrace();
> > 	if (arch_test_bit(TIF_PATCH_PENDING, (unsigned long *)&ti->flags)) {
> > 		/*
> > 		 * Order loads of TIF_PATCH_PENDING vs klp_target_state.
> > 		 * See klp_init_transition().
> > 		 */
> > 		smp_rmb();
> > 		task->patch_state = __READ_ONCE(klp_target_state);
> > 		/*
> > 		 * Concurrent against self; must observe updated
> > 		 * task->patch_state if !TIF_PATCH_PENDING.
> > 		 */
> > 		smp_mb__before_atomic();
>
> IMHO, smp_wmb() should be enough. We get here only when this
> CPU set task->patch_state right above. So the CPU running
> this code already sees the correct task->patch_state.

Yes, I think smp_wmb() and smp_mb__before_atomic() are NOPs for all the
same architectures, so that might indeed be a better choice.

> The read barrier is needed only when @task is entering the kernel and
> does not see TIF_PATCH_PENDING. That case is handled by the smp_rmb()
> in the "else" branch below.
>
> It is possible that both CPUs see TIF_PATCH_PENDING and both
> set task->patch_state. But it should not cause any harm
> because they set the same value. Unless something really
> crazy happens with the internal CPU buses and caches.

Right, not our problem :-) Lots would be broken beyond repair in that
case.

> > 		arch_clear_bit(TIF_PATCH_PENDING, (unsigned long *)&ti->flags);
> > 	} else {
> > 		/*
> > 		 * Concurrent against self, see smp_mb__before_atomic()
> > 		 * above.
> > 		 */
> > 		smp_rmb();
>
> Yeah, this is the counterpart of the above smp_wmb().
>
> > 	}
> > 	preempt_enable_notrace();
> > }
>
> Now, I am scared to increase my paranoia level and search for even more
> possible races. I feel overwhelmed at the moment ;-)

:-)

Anyway, I still need to figure out how to extract this context tracking
stuff from RCU and not make a giant mess of things, so until that
time....
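
[Aside for readers following the barrier argument: the store/load pairing
discussed above (publish task->patch_state, then clear TIF_PATCH_PENDING
with a write barrier; conversely, a reader that finds the bit already
clear issues a read barrier before trusting task->patch_state) can be
modeled in portable user-space C with C11 release/acquire atomics. The
sketch below is illustrative only, not the kernel code: "pending" and
"patch_state" are stand-ins for TIF_PATCH_PENDING and task->patch_state,
and release/acquire stand in for smp_wmb()/smp_rmb(). Build with
"cc -pthread".]

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	/* Stand-in for TIF_PATCH_PENDING: set by the transition start,
	 * cleared by whichever side completes the update first. */
	static _Atomic int pending = 1;

	/* Stand-in for task->patch_state; written before pending is
	 * cleared. */
	static int patch_state = 0;

	static void *writer(void *arg)
	{
		(void)arg;
		patch_state = 1;	/* "task->patch_state = klp_target_state" */
		/*
		 * Release ordering plays the role of the smp_wmb() (or
		 * smp_mb__before_atomic()) before arch_clear_bit(): the
		 * patch_state store must be visible to anyone who
		 * observes pending == 0.
		 */
		atomic_store_explicit(&pending, 0, memory_order_release);
		return NULL;
	}

	static void *reader(void *arg)
	{
		(void)arg;
		/*
		 * Acquire ordering plays the role of the smp_rmb() in the
		 * "else" branch: once pending reads 0, the writer's
		 * patch_state store is guaranteed to be visible too.
		 */
		while (atomic_load_explicit(&pending, memory_order_acquire))
			;	/* spin until the flag is clear */
		printf("patch_state = %d\n", patch_state);	/* always 1 */
		return NULL;
	}

	int main(void)
	{
		pthread_t w, r;

		pthread_create(&r, NULL, reader, NULL);
		pthread_create(&w, NULL, writer, NULL);
		pthread_join(w, NULL);
		pthread_join(r, NULL);
		return 0;
	}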