Re: 2.6.39-rc4+: Kernel leaking memory during FS scanning, regression?

On Mon, 25 Apr 2011 14:49:33 "Paul E. McKenney" wrote:
> On Mon, Apr 25, 2011 at 02:30:02PM -0700, Linus Torvalds wrote:
> > 2011/4/25 Bruno Prémont <bonbons@xxxxxxxxxxxxxxxxx>:
> > >
> > > Between 1-slabinfo and 2-slabinfo some values increased (a lot) while a few
> > > decreased. I don't know which ones are RCU-affected and which are not.
> > 
> > It really sounds as if the tiny-rcu kthread somehow just stops
> > handling callbacks. The ones that keep increasing do seem to be all
> > rcu-free'd (but I didn't really check).
> > 
> > The thing is shown as running:
> > 
> > root         6  0.0  0.0      0     0 ?        R    22:14   0:00  \_
> > [rcu_kthread]
> > 
> > but nothing seems to happen and the CPU time hasn't increased at all.
> > 
> > I dunno. Makes no sense to me, but yeah, I'm definitely blaming
> > tiny-rcu. Paul, any ideas?
> 
> So the only ways I know for something to be runnable but not run on
> a uniprocessor are:
> 
> 1.	The CPU is continually busy with higher-priority work.
> 	This doesn't make sense in this case because the system
> 	is idle much of the time.
> 
> 2.	The system is hibernating.  This doesn't make sense, otherwise
> 	"ps" wouldn't run either.
> 
> Any other ideas on how the heck a process can get into this state?
> (I have thus far been completely unable to reproduce it.)
> 
> The process in question has a loop in rcu_kthread() in kernel/rcutiny.c.
> This loop contains a wait_event_interruptible() that waits for a global
> flag to become non-zero.
> 
> It is awakened by invoke_rcu_kthread() in that same file, which
> simply sets the flag to 1 and does a wake_up(), all with hardirqs
> disabled.
> 
> Hmmm...  One "hail mary" patch below.  What it does is make rcu_kthread
> run at normal priority rather than at real-time priority.  This is
> not for inclusion -- it breaks RCU priority boosting.  But well worth
> trying.
> 
> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
> diff --git a/kernel/rcutiny.c b/kernel/rcutiny.c
> index 0c343b9..4551824 100644
> --- a/kernel/rcutiny.c
> +++ b/kernel/rcutiny.c
> @@ -314,11 +314,15 @@ EXPORT_SYMBOL_GPL(rcu_barrier_sched);
>   */
>  static int __init rcu_spawn_kthreads(void)
>  {
> +#if 0
>  	struct sched_param sp;
> +#endif
>  
>  	rcu_kthread_task = kthread_run(rcu_kthread, NULL, "rcu_kthread");
> +#if 0
>  	sp.sched_priority = RCU_BOOST_PRIO;
>  	sched_setscheduler_nocheck(rcu_kthread_task, SCHED_FIFO, &sp);
> +#endif
>  	return 0;
>  }
>  early_initcall(rcu_spawn_kthreads);
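
(For reference, the wait/wake handshake described above boils down to roughly
the following. This is a simplified sketch of what Paul describes, not the
verbatim kernel/rcutiny.c code: callback processing and priority boosting are
elided, and the flag/waitqueue names are assumptions for illustration.)

/*
 * Minimal sketch of the rcu_kthread wait/wake pattern
 * (assumed names; RCU callback processing elided).
 */
static DECLARE_WAIT_QUEUE_HEAD(rcu_kthread_wq);
static unsigned long have_rcu_kthread_work;

static int rcu_kthread(void *arg)
{
        unsigned long work;
        unsigned long flags;

        for (;;) {
                /* Sleep until invoke_rcu_kthread() flags new work. */
                wait_event_interruptible(rcu_kthread_wq,
                                         have_rcu_kthread_work != 0);
                local_irq_save(flags);
                work = have_rcu_kthread_work;
                have_rcu_kthread_work = 0;
                local_irq_restore(flags);
                if (work) {
                        /* ... invoke the pending RCU callbacks ... */
                }
                schedule_timeout_interruptible(1);  /* Leave CPU for others. */
        }
        return 0;
}

/* Sets the flag and wakes the kthread, with hardirqs disabled. */
static void invoke_rcu_kthread(void)
{
        unsigned long flags;

        local_irq_save(flags);
        have_rcu_kthread_work = 1;
        wake_up(&rcu_kthread_wq);
        local_irq_restore(flags);
}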

I will give that patch a shot on Wednesday evening (European time) as I
won't have enough time in front of the affected box until then to do any
deeper testing. (Same for trying out the other -rc kernels as suggested
by Mike.)

Though I will use the few minutes I have this evening to try to fetch
kernel traces of the running tasks with sysrq+t, which may give us a hint
as to where rcu_kthread is stuck/waiting.

Bruno