(2011/06/20 19:20), Peter Zijlstra wrote:
> On Sat, 2011-05-28 at 16:35 +0000, tip-bot for KOSAKI Motohiro wrote:
>> +++ b/kernel/kthread.c
>> @@ -202,8 +202,8 @@ void kthread_bind(struct task_struct *p, unsigned int cpu)
>> 		return;
>> 	}
>>
>> -	p->cpus_allowed = cpumask_of_cpu(cpu);
>> -	p->rt.nr_cpus_allowed = 1;
>> +	/* It's safe because the task is inactive. */
>> +	do_set_cpus_allowed(p, cpumask_of(cpu));
>> 	p->flags |= PF_THREAD_BOUND;
>> }
>
> I just happened to be staring at this stuff again, and I'm wondering
> how and why this is correct. After kthread_create() the thread exists
> and is exposed in the pid-hash, therefore userspace can come and do
> sys_sched_setaffinity() on it, and since we're not holding any locks and
> set PF_THREAD_BOUND _after_ setting cpus_allowed, things can end up
> funny.
>
> Hmm?

Can't we just take either the rq lock or pi_lock? Layer violation?

From 1c0874b9157f47e22d0f6499612f0f78b830f018 Mon Sep 17 00:00:00 2001
From: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
Date: Tue, 21 Jun 2011 18:15:19 +0900
Subject: [PATCH] kthread: fix kthread_bind() race

Peter Zijlstra reported that kthread_bind() has a race: it holds no
locks and sets PF_THREAD_BOUND _after_ setting cpus_allowed, so the
following interleaving can happen:

	CPU0				CPU1
	----------------------------------------------------
	do_set_cpus_allowed()
					sys_sched_setaffinity()
	p->flags |= PF_THREAD_BOUND;

The fix is to take either the rq lock or pi_lock; either one prevents
the race because set_cpus_allowed_ptr() takes both. This patch uses
the latter.
Reported-by: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
---
 kernel/kthread.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index 4ba7ccc..92e3083 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -196,15 +196,20 @@ EXPORT_SYMBOL(kthread_create_on_node);
  */
 void kthread_bind(struct task_struct *p, unsigned int cpu)
 {
+	unsigned long flags;
+
 	/* Must have done schedule() in kthread() before we set_task_cpu */
 	if (!wait_task_inactive(p, TASK_UNINTERRUPTIBLE)) {
 		WARN_ON(1);
 		return;
 	}

+	/* protect from a race against set_cpus_allowed_ptr() */
+	raw_spin_lock_irqsave(&p->pi_lock, flags);
 	/* It's safe because the task is inactive. */
 	do_set_cpus_allowed(p, cpumask_of(cpu));
 	p->flags |= PF_THREAD_BOUND;
+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
 }
 EXPORT_SYMBOL(kthread_bind);
-- 
1.7.3.1