21.08.2013, 21:24, "Sebastian Andrzej Siewior" <bigeasy@xxxxxxxxxxxxx>:
> Now that I looked at it for a while, would this fix your trouble?
>
> diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
> --- a/kernel/rtmutex.c
> +++ b/kernel/rtmutex.c
> @@ -724,6 +724,7 @@ static void noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock)
>  	struct task_struct *lock_owner, *self = current;
>  	struct rt_mutex_waiter waiter, *top_waiter;
>  	int ret;
> +	int new_state;
>
>  	rt_mutex_init_waiter(&waiter, true);
>
> @@ -744,8 +745,11 @@ static void noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock)
>  	 * try_to_wake_up().
>  	 */
>  	pi_lock(&self->pi_lock);
> +	new_state = TASK_UNINTERRUPTIBLE;
> +	if (task_is_traced(self))
> +		new_state |= __TASK_TRACED;
>  	self->saved_state = self->state;
> -	__set_current_state(TASK_UNINTERRUPTIBLE);
> +	__set_current_state(new_state);
>  	pi_unlock(&self->pi_lock);
>
>  	ret = task_blocks_on_rt_mutex(lock, &waiter, self, 0);
>
> This should avoid losing the trace state while waiting on that
> mutex, and the checks for "is traced" may remain the same.

This was the first thing I tried, but it did not work. ptrace_check_attach() passes TASK_TRACED to wait_task_inactive(), which tests for equality:

unsigned long wait_task_inactive(struct task_struct *p, long match_state)
< ... >
	if (match_state && unlikely(p->state != match_state) &&
	    unlikely(p->saved_state != match_state)) {

This test fails because we have added TASK_UNINTERRUPTIBLE to p->state and possibly removed some other bits, so p->state can never compare equal to match_state. Instead, I decided to add pi_lock locking - it is slower, but it works with a 100% guarantee.

Also, adding __TASK_TRACED does not help with all the other places where saved_state is checked, and more races might still be hidden.

--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html