On Mon, 2013-09-30 at 11:51 -0400, Waiman Long wrote:
> On 09/28/2013 12:34 AM, Jason Low wrote:
> >> Also, below is what the mcs_spin_lock() and mcs_spin_unlock()
> >> functions would look like after applying the proposed changes.
> >>
> >> static noinline
> >> void mcs_spin_lock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
> >> {
> >>         struct mcs_spin_node *prev;
> >>
> >>         /* Init node */
> >>         node->locked = 0;
> >>         node->next = NULL;
> >>
> >>         prev = xchg(lock, node);
> >>         if (likely(prev == NULL)) {
> >>                 /* Lock acquired. No need to set node->locked since it
> >>                    won't be used */
> >>                 return;
> >>         }
> >>         ACCESS_ONCE(prev->next) = node;
> >>         /* Wait until the lock holder passes the lock down */
> >>         while (!ACCESS_ONCE(node->locked))
> >>                 arch_mutex_cpu_relax();
> >>         smp_mb();
>
> I wonder if a memory barrier is really needed here.

If the compiler can reorder the while (!ACCESS_ONCE(node->locked)) check
so that the check occurs after an instruction in the critical section,
then the barrier may be necessary.
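
For illustration only (not part of the quoted patch), here is a minimal,
self-contained userspace sketch of the ordering concern, written with C11
atomics rather than the kernel primitives. The names handoff_node,
critical_data, waiter and owner are made up for this sketch; the acquire
fence after the spin loop stands in for the smp_mb() being discussed, and
it pairs with a release fence on the side that hands the lock down.

/* Build with: gcc -std=c11 -pthread mcs_handoff_sketch.c (hypothetical file name) */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

struct handoff_node {
	atomic_int locked;	/* set to 1 when the lock is passed down */
};

static struct handoff_node node;
static int critical_data;	/* data protected by the hand-off */

/* Waiter: analogous to the tail of the quoted mcs_spin_lock(). */
static int waiter(void *arg)
{
	/* Spin until the previous owner passes the lock down. */
	while (!atomic_load_explicit(&node.locked, memory_order_relaxed))
		thrd_yield();	/* stand-in for arch_mutex_cpu_relax() */

	/*
	 * This acquire fence plays the role of the smp_mb() in question:
	 * it keeps the read of critical_data below from being reordered
	 * ahead of the observation that locked became 1.
	 */
	atomic_thread_fence(memory_order_acquire);

	printf("critical_data = %d\n", critical_data);
	return 0;
}

/* Previous owner: analogous to the unlock side passing the lock down. */
static int owner(void *arg)
{
	critical_data = 42;	/* work done while holding the lock */

	/* Release fence pairs with the waiter's acquire fence. */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&node.locked, 1, memory_order_relaxed);
	return 0;
}

int main(void)
{
	thrd_t w, o;

	thrd_create(&w, waiter, NULL);
	thrd_create(&o, owner, NULL);
	thrd_join(w, NULL);
	thrd_join(o, NULL);
	return 0;
}

The sketch only shows where the ordering constraint would sit if one is
needed; it is not meant to settle the question for the kernel code above.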