On Sat, Oct 24, 2015 at 10:23:14PM -0700, Joonwoo Park wrote:

> @@ -1069,7 +1069,7 @@ static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new
>  {
>  	lockdep_assert_held(&rq->lock);
>
> -	dequeue_task(rq, p, 0);
> +	dequeue_task(rq, p, DEQUEUE_MIGRATING);
>  	p->on_rq = TASK_ON_RQ_MIGRATING;
>  	set_task_cpu(p, new_cpu);
>  	raw_spin_unlock(&rq->lock);
>
> @@ -5656,7 +5671,7 @@ static void detach_task(struct task_struct *p, struct lb_env *env)
>  {
>  	lockdep_assert_held(&env->src_rq->lock);
>
> -	deactivate_task(env->src_rq, p, 0);
> +	deactivate_task(env->src_rq, p, DEQUEUE_MIGRATING);
>  	p->on_rq = TASK_ON_RQ_MIGRATING;
>  	set_task_cpu(p, env->dst_cpu);
>  }

Also note that both sites already set TASK_ON_RQ_MIGRATING -- albeit late.
Can't you simply set that earlier (and back to QUEUED later) and test for
task_on_rq_migrating() instead of blowing up the fastpath like you did?
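
Something like the below is what I mean -- a rough, completely untested
sketch; the task_on_rq_migrating() hunk only stands in for wherever your
DEQUEUE_MIGRATING test ended up in the fair path:

	static void detach_task(struct task_struct *p, struct lb_env *env)
	{
		lockdep_assert_held(&env->src_rq->lock);

		/* flip to MIGRATING _before_ the dequeue so class code can see it */
		p->on_rq = TASK_ON_RQ_MIGRATING;
		deactivate_task(env->src_rq, p, 0);
		set_task_cpu(p, env->dst_cpu);
	}

and then, in whichever fair.c helper wanted the new flag:

	if (task_on_rq_migrating(p))
		return;	/* dequeue due to migration, not a 'real' dequeue */

Same transformation for move_queued_task(); the enqueue side already sets
p->on_rq back to TASK_ON_RQ_QUEUED, so nothing extra is needed there.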