On Mon, 15 Jan 2018, Mathieu Desnoyers wrote:
> +static int membarrier_shared_expedited(void)
> +{
> +	int cpu;
> +	bool fallback = false;
> +	cpumask_var_t tmpmask;
> +
> +	if (num_online_cpus() == 1)
> +		return 0;
> +
> +	/*
> +	 * Matches memory barriers around rq->curr modification in
> +	 * scheduler.
> +	 */
> +	smp_mb();	/* system call entry is not a mb. */
> +
> +	/*
> +	 * Expedited membarrier commands guarantee that they won't
> +	 * block, hence the GFP_NOWAIT allocation flag and fallback
> +	 * implementation.
> +	 */
> +	if (!zalloc_cpumask_var(&tmpmask, GFP_NOWAIT)) {
> +		/* Fallback for OOM. */
> +		fallback = true;
> +	}
> +
> +	cpus_read_lock();
> +	for_each_online_cpu(cpu) {
> +		struct task_struct *p;
> +
> +		/*
> +		 * Skipping the current CPU is OK even though we can be
> +		 * migrated at any point. The current CPU, at the point
> +		 * where we read raw_smp_processor_id(), is ensured to
> +		 * be in program order with respect to the caller
> +		 * thread. Therefore, we can skip this CPU from the
> +		 * iteration.
> +		 */
> +		if (cpu == raw_smp_processor_id())
> +			continue;
> +		rcu_read_lock();
> +		p = task_rcu_dereference(&cpu_rq(cpu)->curr);
> +		if (p && p->mm && (atomic_read(&p->mm->membarrier_state) &
> +				   MEMBARRIER_STATE_SHARED_EXPEDITED)) {

This does not make sense vs. the documentation:

> + * @MEMBARRIER_CMD_SHARED_EXPEDITED:
> + *                          Execute a memory barrier on all running threads
> + *                          part of a process which previously registered
> + *                          with MEMBARRIER_CMD_REGISTER_SHARED_EXPEDITED.

This should say:

> + *                          Execute a memory barrier on all running threads
> + *                          of all processes which previously registered
> + *                          with MEMBARRIER_CMD_REGISTER_SHARED_EXPEDITED.

And I really have to ask whether this should be named _GLOBAL_ instead of
_SHARED_. Hmm?

Thanks,

	tglx