On Tue, Dec 22, 2009 at 4:29 PM, John David Anglin <dave@xxxxxxxxxxxxxxxxxx> wrote:
> It would seem to me we have deadlock.  Two threads trying to lock
> the same mutex.  One should win...  It may be the mutex was left
> locked by some other thread.

Yes, it was left locked by thread #2, which then tried to acquire it
again. See below.

> This is the mutex:
>
> (gdb) p *mutex
> $1 = {__data = {__lock = 2, __count = 0, __owner = 18093, __kind = 0,
>     __compat_padding = {0, 0, 0, 0}, __nusers = 1, {__spins = 0, __list = {
>         __next = 0x0}}, __reserved1 = 0, __reserved2 = 0},
>   __size = "\000\000\000\002\000\000\000\000\000\000F\255", '\000' <repeats 23 times>, "\001", '\000' <repeats 11 times>, __align = 2}

The __nusers field indicates that one user, namely __owner tid 18093,
has already taken the lock. The mutex owner is thread #2 in your
example (its tid is shown in your backtrace). Thread #2 is also trying
to lock the mutex again and is deadlocked.

The mutex has the default __kind of PTHREAD_MUTEX_TIMED_NP. That
doesn't mean it is timed, only that it supports a timed lock; if you
call pthread_mutex_lock on it, it locks just like any other normal
(non-recursive) mutex.

The lock value is 2, which indicates a private lock. Your backtrace
shows that you have called __lll_lock_wait_private, which is correct
for this case.

Under what conditions would thread #2 have a chance to try to take its
own lock again?

> It's hard to see what's happening because I don't seem to be able to
> single step the threads.

Yeah, there are some gdb/ptrace issues I need to sort out for the
newly minted NPTL support.

>>
>> I think we need to step back from the edge and ask ourselves what
>> Tcl_WaitForEvent() is trying to do with the locks.
>>
>> Do you know?
>
> No.

Note that thread #2 called pthread_mutex_lock from a different
location. We need to determine under what conditions thread #2 would
be able to take the lock again and deadlock.

Cheers,
Carlos.
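
As a minimal sketch of the failure mode discussed above (not part of the
original report; variable names are illustrative, compile with gcc
-pthread), a default glibc/NPTL mutex re-locked by its owning thread
hangs exactly this way, while an error-checking mutex reports the same
situation as EDEADLK instead of blocking:

#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Error-checking mutex: a second lock by the owning thread returns
       EDEADLK instead of blocking, which makes the re-lock call site
       easy to spot. */
    pthread_mutexattr_t attr;
    pthread_mutex_t chk;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&chk, &attr);

    pthread_mutex_lock(&chk);
    int rc = pthread_mutex_lock(&chk);      /* same thread, second lock */
    printf("errorcheck re-lock: %s\n", strerror(rc));  /* EDEADLK */
    pthread_mutex_unlock(&chk);
    pthread_mutex_destroy(&chk);
    pthread_mutexattr_destroy(&attr);

    /* Default mutex (__kind == PTHREAD_MUTEX_TIMED_NP): the second lock
       never returns.  At this point __owner holds this thread's tid and
       __nusers is 1, matching the gdb dump above, and the thread blocks
       in __lll_lock_wait_private for a process-private mutex. */
    pthread_mutex_t def = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_lock(&def);
    pthread_mutex_lock(&def);               /* deadlocks here */
    return 0;
}

Temporarily initializing the contended Tcl mutex as
PTHREAD_MUTEX_ERRORCHECK (a debugging suggestion, not something done in
the thread) is one way to locate the second lock call, since the
failing pthread_mutex_lock() then returns an error rather than hanging.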