Erik Faye-Lund <kusmabite@xxxxxxxxx> wrote:
> On Wed, Oct 26, 2011 at 5:44 AM, Kyle Moffett <kyle@xxxxxxxxxxxxxxx> wrote:
> > On Tue, Oct 25, 2011 at 16:51, Erik Faye-Lund <kusmabite@xxxxxxxxx> wrote:
> >> On Tue, Oct 25, 2011 at 10:07 PM, Johannes Sixt <j6t@xxxxxxxx> wrote:
> >>> Am 25.10.2011 17:42, schrieb Erik Faye-Lund:
> >>>> On Tue, Oct 25, 2011 at 5:28 PM, Johannes Sixt <j.sixt@xxxxxxxxxxxxx> wrote:
> >>>>> Am 10/25/2011 16:55, schrieb Erik Faye-Lund:
> >>>>>> +int pthread_mutex_lock(pthread_mutex_t *mutex)
> >>>>>> +{
> >>>>>> + [snip]
> >>>>>
> >>>>> The double-checked locking idiom. Very suspicious. Can you explain why it
> >>> [snip]
> >>>
> >>> 	if (mutex->autoinit) {
> >>>
> >>> Assume two threads enter this block.
> >>>
> >>> 		if (InterlockedCompareExchange(&mutex->autoinit, -1, 1) != -1) {
> >>>
> >>> Only one thread, A, say on CPU A, will enter this block.
> >>>
> >>> 			InitializeCriticalSection(&mutex->cs);
> >>>
> >>> Thread A writes some values. Note that there are no memory barriers
> >>> involved here. Not that I know of or that they would be documented.
> >>>
> >>> 			mutex->autoinit = 0;
> >>>
> >>> And it writes another one. Thread A continues below to contend for the
> >>> mutex it just initialized.
> >>>
> >>> 		} else
> >>>
> >>> Meanwhile, thread B, say on CPU B, spins in this loop:
> >>>
> >>> 			while (mutex->autoinit != 0)
> >>> 				; /* wait for other thread */
> >>>
> >>> When thread B arrives here, it sees the value of autoinit that thread A
> >>> has written above.
> >>>
> >>> [snip]
> >>>
> >>
> >> Thanks for pointing this out, I completely forgot about write re-ordering.
> >>
> >> This is indeed a problem. So, shouldn't replacing "mutex->autoinit =
> >> 0;" with "InterlockedExchange(&mutex->autoinit, 0)" solve the problem?
> >> InterlockedExchange generates a full memory barrier:
> >> http://msdn.microsoft.com/en-us/library/windows/desktop/ms683590(v=vs.85).aspx
> >
> > No, I'm afraid that won't solve the issue (at least in GCC, not sure about MSVC)
> >
> > A write barrier in one thread is only effective if it is paired with a
> > read barrier in the other thread.
> >
> > Since there's no read barrier in the "while(mutex->autoinit != 0)",
> > you don't have any guaranteed ordering.

Out of curiosity, where could re-ordering be a problem here?  I'm
thinking probably at "EnterCriticalSection(&mutex->cs)", with the
contents of "mutex->cs" not being propagated to the waiting thread.
However, shouldn't that be a non-problem, as far as compiler reordering
goes, because it's an external function call and only the address of
mutex->cs is passed?

The only other cause I could think of is if ordering at the CPU was
somehow different (it could be, if there are no special provisions for
calling external functions), or if "InterlockedExchange(&mutex->autoinit,
0)" wasn't atomic in updating autoinit and doing the memory barrier.
Either way, I couldn't vouch for the safety of the above logic without a
memory barrier, so this question is purely of an academic nature. :)

> > I guess if MSVC assumes that volatile reads imply barriers then it might work...
>
> OK, so I should probably do something like this instead?
>
> 	while (InterlockedCompareExchange(&mutex->autoinit, 0, 0) != 0)
> 		; /* wait for other thread */

Technically, assuming only the updating of "mutex->cs" is in question,
the ICE should only be required once after exiting the loop...
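For illustration only, here is roughly how I'd picture the two changes
combined -- just a sketch, assuming "autoinit" is a volatile LONG that
starts at 1 and "cs" is a CRITICAL_SECTION, and glossing over the rest
of the emulation; this is not the actual patch:

	#include <windows.h>

	typedef struct {
		volatile LONG autoinit;	/* 1 = needs init, -1 = init in progress, 0 = ready */
		CRITICAL_SECTION cs;
	} pthread_mutex_t;

	int pthread_mutex_lock(pthread_mutex_t *mutex)
	{
		if (mutex->autoinit) {
			LONG state = InterlockedCompareExchange(&mutex->autoinit, -1, 1);
			if (state == 1) {
				/* we won the race: initialize, then publish with
				 * InterlockedExchange, which is a full barrier */
				InitializeCriticalSection(&mutex->cs);
				InterlockedExchange(&mutex->autoinit, 0);
			} else if (state == -1) {
				/* another thread is initializing; wait for it */
				while (mutex->autoinit != 0)
					; /* spin */
				/* one interlocked op after the loop acts as the
				 * read barrier, so the CRITICAL_SECTION contents
				 * are visible before we use them */
				InterlockedCompareExchange(&mutex->autoinit, 0, 0);
			}
			/* state == 0: already initialized; the interlocked
			 * exchange above already acted as a barrier */
		}
		EnterCriticalSection(&mutex->cs);
		return 0;
	}

Again, that's only how I imagine it; I haven't tested it.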
There's a question of the propagation of the value of "mutex->autoinit"
itself, but my take is that the memory barrier on the writing thread will
push out the updated value across all CPUs, thus preventing an infinite
loop.  The other factors, value caching and loop optimization by the
compiler, should be prevented by the "volatile" keyword, even with gcc or
MSVC 2003.

> I really appreciate getting some extra eyes on this, thanks.
> Concurrent programming is not my strong-suit (as this exercise has
> shown) ;)

So would I. :)

--
Atsushi Nakagawa <atnak@xxxxxxxxx>
Changes are made when there is inconvenience.