[...]
+
+void resume_remote_cpus(void)
+{
+	cpumask_t cpus_to_resume;
+
+	lockdep_assert_cpus_held();
+	lockdep_assert_preemption_disabled();
+
+	cpumask_copy(&cpus_to_resume, cpu_online_mask);
+	cpumask_clear_cpu(smp_processor_id(), &cpus_to_resume);
+
+	spin_lock(&cpu_pause_lock);
+
+	cpumask_setall(&resumed_cpus);
+	/* A typical example for sleep and wake-up functions. */
+	smp_mb();
+	while (cpumask_intersects(&cpus_to_resume, &paused_cpus)) {
+		sev();
+		cpu_relax();
+		barrier();
+	}
I'm curious: is there a fundamental reason why we wait for the paused CPUs to actually start running again, or is it simply easier to get the implementation race-free? In particular, when we have two pause_remote_cpus() calls shortly after each other, another remote CPU might still be on its way out of pause_local_cpu() from the first pause.
-- Cheers, David / dhildenb