On Sun, Mar 15, 2015 at 11:49:07PM +0200, Matthias Bonne wrote:
> So both mutex_trylock() and mutex_unlock() always use the slow paths.
> The slowpath for mutex_unlock() is __mutex_unlock_slowpath(), which
> simply calls __mutex_unlock_common_slowpath(), and the latter starts
> like this:
>
>         /*
>          * As a performance measurement, release the lock before doing other
>          * wakeup related duties to follow. This allows other tasks to acquire
>          * the lock sooner, while still handling cleanups in past unlock calls.
>          * This can be done as we do not enforce strict equivalence between the
>          * mutex counter and wait_list.
>          *
>          *
>          * Some architectures leave the lock unlocked in the fastpath failure
>          * case, others need to leave it locked. In the later case we have to
>          * unlock it here - as the lock counter is currently 0 or negative.
>          */
>         if (__mutex_slowpath_needs_to_unlock())
>                 atomic_set(&lock->count, 1);
>
>         spin_lock_mutex(&lock->wait_lock, flags);
>         [...]
>
> So the counter is set to 1 before taking the spinlock, which I think
> might cause the race. Did I miss something?

Yes, you're missing the fact that __mutex_slowpath_needs_to_unlock() is 0
for the CONFIG_DEBUG_MUTEXES case:

#ifdef CONFIG_DEBUG_MUTEXES
# include "mutex-debug.h"
# include <asm-generic/mutex-null.h>
/*
 * Must be 0 for the debug case so we do not do the unlock outside of the
 * wait_lock region. debug_mutex_unlock() will do the actual unlock in this
 * case.
 */
# undef __mutex_slowpath_needs_to_unlock
# define __mutex_slowpath_needs_to_unlock()	0
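
For anyone following along, the whole difference is just *where* the
counter gets released relative to wait_lock. Below is a rough userspace
sketch of that ordering; it is not kernel code, and all the toy_* names
and the TOY_DEBUG_MUTEXES switch are invented here purely to mirror the
shape of __mutex_unlock_common_slowpath(), with a pthread mutex standing
in for wait_lock:

        /*
         * Toy model of the unlock ordering (NOT kernel code; all toy_*
         * names are made up for illustration).  Build with:
         *   cc -pthread toy.c                       (non-debug ordering)
         *   cc -pthread -DTOY_DEBUG_MUTEXES toy.c   (debug ordering)
         */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        struct toy_mutex {
                atomic_int count;          /* 1 = unlocked, 0 = locked, <0 = waiters */
                pthread_mutex_t wait_lock; /* protects the (omitted) wait list */
        };

        #ifdef TOY_DEBUG_MUTEXES
        # define toy_slowpath_needs_to_unlock() 0  /* like CONFIG_DEBUG_MUTEXES */
        #else
        # define toy_slowpath_needs_to_unlock() 1
        #endif

        static void toy_unlock_slowpath(struct toy_mutex *m)
        {
                /* Non-debug: release the counter early, before wait_lock. */
                if (toy_slowpath_needs_to_unlock())
                        atomic_store(&m->count, 1);

                pthread_mutex_lock(&m->wait_lock);

                /*
                 * Debug: the release happens only here, inside the
                 * wait_lock region, the way debug_mutex_unlock() does it,
                 * so the counter and the wait list always look consistent
                 * to anyone holding wait_lock.
                 */
                if (!toy_slowpath_needs_to_unlock())
                        atomic_store(&m->count, 1);

                /* ... wake one waiter from the wait list here ... */
                pthread_mutex_unlock(&m->wait_lock);
        }

        int main(void)
        {
                struct toy_mutex m = {
                        .count = 0,   /* pretend it is currently locked */
                        .wait_lock = PTHREAD_MUTEX_INITIALIZER,
                };

                toy_unlock_slowpath(&m);
                printf("count after unlock: %d\n", atomic_load(&m.count));
                return 0;
        }

With TOY_DEBUG_MUTEXES defined, the counter only ever flips back to 1
while wait_lock is held, which is exactly why the early atomic_set() in
the quoted slowpath is skipped for CONFIG_DEBUG_MUTEXES.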