On Tue, 2011-02-01 at 09:50 -0500, Rik van Riel wrote:
> +/**
> + * yield_to - yield the current processor to another thread in
> + * your thread group, or accelerate that thread toward the
> + * processor it's on.
> + *
> + * It's the caller's job to ensure that the target task struct
> + * can't go away on us before we can do any checks.
> + *
> + * Returns true if we indeed boosted the target task.
> + */
> +bool __sched yield_to(struct task_struct *p, bool preempt)
> +{
> +	struct task_struct *curr = current;
> +	struct rq *rq, *p_rq;
> +	unsigned long flags;
> +	bool yielded = 0;
> +
> +	local_irq_save(flags);
> +	rq = this_rq();
> +
> +again:
> +	p_rq = task_rq(p);
> +	double_rq_lock(rq, p_rq);
> +	while (task_rq(p) != p_rq) {
> +		double_rq_unlock(rq, p_rq);
> +		goto again;
> +	}
> +
> +	if (!curr->sched_class->yield_to_task)
> +		goto out;
> +
> +	if (curr->sched_class != p->sched_class)
> +		goto out;
> +
> +	if (task_running(p_rq, p) || p->state)
> +		goto out;
> +
> +	yielded = curr->sched_class->yield_to_task(rq, p, preempt);
> +
> +	if (yielded) {
> +		schedstat_inc(rq, yld_count);
> +		current->sched_class->yield_task(rq);
> +	}

We can avoid this second indirect function call by

> +
> +out:
> +	double_rq_unlock(rq, p_rq);
> +	local_irq_restore(flags);
> +
> +	if (yielded)
> +		schedule();
> +
> +	return yielded;
> +}
> +EXPORT_SYMBOL_GPL(yield_to);

> +static bool yield_to_task_fair(struct rq *rq, struct task_struct *p, bool preempt)
> +{
> +	struct sched_entity *se = &p->se;
> +
> +	if (!se->on_rq)
> +		return false;
> +
> +	/* Tell the scheduler that we'd really like pse to run next. */
> +	set_next_buddy(se);
> +
> +	/* Make p's CPU reschedule; pick_next_entity takes care of fairness. */
> +	if (preempt)
> +		resched_task(rq->curr);

calling:

	yield_task_fair(rq);

here.

> +	return true;
> +}

I'll make that change on commit.
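For reference, a sketch of yield_to_task_fair() with that change folded
in. This is illustrative only, not the committed code; it assumes
yield_task_fair() is in scope at this point in sched_fair.c and that the
now-redundant current->sched_class->yield_task(rq) call is dropped from
yield_to():

	static bool yield_to_task_fair(struct rq *rq, struct task_struct *p,
				       bool preempt)
	{
		struct sched_entity *se = &p->se;

		/* The target task must be queued, else there is nothing to boost. */
		if (!se->on_rq)
			return false;

		/* Tell the scheduler that we'd really like pse to run next. */
		set_next_buddy(se);

		/*
		 * Yield within the class directly, saving yield_to() the
		 * second indirect call through ->yield_task().
		 */
		yield_task_fair(rq);

		/* Make p's CPU reschedule; pick_next_entity takes care of fairness. */
		if (preempt)
			resched_task(rq->curr);

		return true;
	}

With that, the if (yielded) block in yield_to() only needs to bump the
schedstat:

	if (yielded)
		schedstat_inc(rq, yld_count);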