On 09/22, Peter Zijlstra wrote:
>
> +static void wait_task_inactive_sched_in(struct preempt_notifier *n, int cpu)
> +{
> +	struct task_struct *p;
> +	struct wait_task_inactive_blocked *blocked =
> +		container_of(n, struct wait_task_inactive_blocked, notifier);
> +
> +	hlist_del(&n->link);
> +
> +	p = ACCESS_ONCE(blocked->waiter);
> +	blocked->waiter = NULL;
> +	wake_up_process(p);
> +}
> ...
> +static void
> +wait_task_inactive_sched_out(struct preempt_notifier *n, struct task_struct *next)
> +{
> +	if (current->on_rq) /* we're not inactive yet */
> +		return;
> +
> +	hlist_del(&n->link);
> +	n->ops = &wait_task_inactive_ops_post;
> +	hlist_add_head(&n->link, &next->preempt_notifiers);
> +}

Tricky ;) Yes, the first ->sched_out() is not enough.

> unsigned long wait_task_inactive(struct task_struct *p, long match_state)
> {
> ...
> +	rq = task_rq_lock(p, &flags);
> +	trace_sched_wait_task(p);
> +	if (!p->on_rq) /* we're already blocked */
> +		goto done;

This doesn't look right. schedule() clears ->on_rq long before
__switch_to() etc. And it seems that we check ->on_cpu above; this is
not UP friendly.

>
> -		set_current_state(TASK_UNINTERRUPTIBLE);
> -		schedule_hrtimeout(&to, HRTIMER_MODE_REL);
> -		continue;
> -	}
> +	hlist_add_head(&blocked.notifier.link, &p->preempt_notifiers);
> +	task_rq_unlock(rq, p, &flags);

I thought about reimplementing wait_task_inactive() too, but afaics
there is a problem: why can't we race with p doing
register_preempt_notifier()? I guess register_ needs rq->lock too.

Oleg.

--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html