On 12/08/2010 03:00 PM, Peter Zijlstra wrote:
> Anyway, complete untested and such..
Looks very promising. I've been making a few changes in the same direction (except for the fancy CFS bits) and have one way to solve the one problem you pointed out in your patch.
> +void yield_to(struct task_struct *p)
> +{
> ...
> +	on_rq = p->se.on_rq;
> +	if (on_rq)
> +		dequeue_task(p_rq, p, 0);
> +
> +	ret = 0;
> +	if (p->sched_class == curr->sched_class && curr->sched_class->yield_to)
> +		curr->sched_class->yield_to(p);
> +
> +	if (on_rq)
> +		enqueue_task(p_rq, p, 0);
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index c886717..8689bcd 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> +static void yield_to_fair(struct task_struct *p)
> +{
> +	struct sched_entity *se = &current->se;
> +	struct sched_entity *p_se = &p->se;
> +	u64 lag0, p_lag0;
> +	s64 lag, p_lag;
> +
> +	lag0 = avg_vruntime(cfs_rq_of(se));
> +	p_lag0 = avg_vruntime(cfs_rq_of(p_se));
> +
> +	lag = se->vruntime - lag0;
> +	p_lag = p_se->vruntime - p_lag0;
> +
> +	if (p_lag > lag) { /* if P is owed less service */
> +		se->vruntime = lag0 + p_lag;
> +		p_se->vruntime = p_lag0 + lag;
> +	}
> +
> +	/*
> +	 * XXX try something smarter here
> +	 */
> +	resched_task(p);
> +	resched_task(current);
> +}
If we do the dequeue_task and enqueue_task here, we can use check_preempt_curr in yield_to_fair. Alternatively, we can do the rescheduling from the main yield_to function, rather than from yield_to_fair, by calling check_preempt_curr on p and current after p has been enqueued.

-- 
All rights reversed
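That alternative might look roughly like the sketch below (untested, and eliding the rq locking and ret handling of the original patch); the point is that once p is back on its runqueue, the existing wakeup-preemption path can decide whether p should preempt whatever is running there:

```c
void yield_to(struct task_struct *p)
{
	...
	on_rq = p->se.on_rq;
	if (on_rq)
		dequeue_task(p_rq, p, 0);

	if (p->sched_class == curr->sched_class && curr->sched_class->yield_to)
		curr->sched_class->yield_to(p);

	if (on_rq) {
		enqueue_task(p_rq, p, 0);
		/* let the normal preemption logic pick the next runner */
		check_preempt_curr(p_rq, p, 0);
	}
	...
}
```

This keeps the sched_class callback free of resched_task calls, so yield_to_fair only has to adjust vruntimes.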