Re: [PATCH] avoid race condition in pick_next_task_fair in kernel/sched_fair.c

On Thu, 2010-12-23 at 13:12 +0100, Peter Zijlstra wrote:
> On Thu, 2010-12-23 at 10:08 +0800, Yong Zhang wrote:
> > > systemd--1251    0d..5. 2015398us : enqueue_task_fair <-enqueue_task
> > > systemd--1251    0d..5. 2015398us : print_runqueue <-enqueue_task_fair
> > > systemd--1251    0d..5. 2015399us : __print_runqueue:  cfs_rq: c2407c34, nr: 3, load: 3072
> > > systemd--1251    0d..5. 2015400us : __print_runqueue:  curr: f6a8de5c, comm: systemd-cgroups/1251, load: 1024
> > > systemd--1251    0d..5. 2015401us : __print_runqueue:  se: f69e6300, load: 1024,
> > > systemd--1251    0d..5. 2015401us : __print_runqueue:    cfs_rq: f69e6540, nr: 2, load: 2048
> > > systemd--1251    0d..5. 2015402us : __print_runqueue:    curr: (null)
> > > systemd--1251    0d..5. 2015402us : __print_runqueue:    se: f69e65a0, load: 4137574976,
> > 
> > the load == f69e65a0 == address of se, odd
> 
> This appears to be consistently true; I've also found that in between
> these two prints there is a free_sched_group() freeing that exact
> entry. So the second print is a use-after-free artifact.
> 
> What's interesting is that it's freeing a cfs_rq struct with
> nr_running=1, which should not be possible...
> 
> /me goes stare at the whole cgroup task attach vs cgroup destruction
> muck.

 systemd-1       0d..1. 2070793us : sched_destroy_group: se: f69e43c0, load: 1024
 systemd-1       0d..1. 2070794us : sched_destroy_group: cfs_rq: f69e4720, nr: 1, load: 1024
 systemd-1       0d..1. 2070794us : __print_runqueue:  cfs_rq: f69e4720, nr: 1, load: 1024
 systemd-1       0d..1. 2070795us : __print_runqueue:  curr: (null)
 systemd-1       0d..1. 2070796us : __print_runqueue:  se: f6a8eb4c, comm: systemd-tmpfile/1243, load: 1024
 systemd-1       0d..1. 2070796us : _raw_spin_unlock_irqrestore <-sched_destroy_group

So somehow it manages to destroy a group with a task attached.
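
Below is an untested sketch (not the fix, and the helper name is made
up) of the kind of check that would make this scream instead of
silently tearing down a non-empty group; it assumes
CONFIG_FAIR_GROUP_SCHED and the .37-era fields, and would live in
kernel/sched.c next to sched_destroy_group():

static void check_tg_is_empty(struct task_group *tg)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu];

		/* a group being destroyed should have nothing queued */
		WARN_ONCE(cfs_rq && cfs_rq->nr_running,
			  "destroying tg %p with %lu runnable on cpu %d\n",
			  tg, (unsigned long)cfs_rq->nr_running, cpu);
	}
}

Calling that at the top of sched_destroy_group(), under task_group_lock,
would at least pinpoint which cpu's queue still holds the stray entity
when the race hits.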

