Re: Deadlock possibly caused by too_many_isolated.

On Tue, 19 Oct 2010 09:31:42 +1100
Neil Brown <neilb@xxxxxxx> wrote:

> On Mon, 18 Oct 2010 14:58:59 -0700
> Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> 
> > On Tue, 19 Oct 2010 00:15:04 +0800
> > Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
> > 
> > > Neil finds that if too_many_isolated() returns true while performing
> > > direct reclaim we can end up waiting for other threads to complete their
> > > direct reclaim.  If those threads are allowed to enter the FS or IO to
> > > free memory, but this thread is not, then it is possible that those
> > > threads will be waiting on this thread and so we get a circular
> > > deadlock.
> > > 
> > > some task enters direct reclaim with GFP_KERNEL
> > >   => too_many_isolated() false
> > >     => vmscan and run into dirty pages
> > >       => pageout()
> > >         => take some FS lock
> > > 	  => fs/block code does GFP_NOIO allocation
> > > 	    => enter direct reclaim again
> > > 	      => too_many_isolated() true
> > > 		=> waiting for others to progress, however the other
> > > 		   tasks may be circular waiting for the FS lock..

I'm assuming that the last four "=>"'s here should have been indented
another stop.
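
For reference, the throttling in question sits at the top of
shrink_inactive_list().  Paraphrasing mm/vmscan.c from memory (so the
details may be slightly off), the check is roughly:

static int too_many_isolated(struct zone *zone, int file,
			     struct scan_control *sc)
{
	unsigned long inactive, isolated;

	if (current_is_kswapd())
		return 0;

	if (file) {
		inactive = zone_page_state(zone, NR_INACTIVE_FILE);
		isolated = zone_page_state(zone, NR_ISOLATED_FILE);
	} else {
		inactive = zone_page_state(zone, NR_INACTIVE_ANON);
		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
	}

	return isolated > inactive;
}

and the caller spins on it:

	while (unlikely(too_many_isolated(zone, file, sc))) {
		/* wait for the other reclaimers to put pages back on the LRU */
		congestion_wait(BLK_RW_ASYNC, HZ/10);
	}

A GFP_NOIO allocation stuck in that loop while holding an FS lock is
exactly the scenario above: it cannot make progress itself, and the
reclaimers it is waiting on may be blocked on the very lock it holds.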

> > > The fix is to let !__GFP_IO and !__GFP_FS direct reclaims enjoy higher
> > > priority than normal ones, by granting them a higher throttle threshold.
> > > 
> > > Now !GFP_IOFS reclaims won't be waiting for GFP_IOFS reclaims to
> > > progress. They will be blocked only when there are too many concurrent
> > > !GFP_IOFS reclaims, but that's very unlikely because IO-less direct
> > > reclaims are able to progress much faster, and they won't deadlock
> > > each other. The threshold is raised high enough for them, so that
> > > there can be sufficient parallel progress of !GFP_IOFS reclaims.
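
Reading that description, the effect could presumably be obtained with
something along these lines at the end of too_many_isolated() (a sketch
of one way to do it, not the actual patch text):

	/*
	 * Callers that may enter the FS/IO layers get throttled earlier,
	 * so GFP_NOIO/GFP_NOFS reclaimers never end up waiting on them;
	 * they only throttle against each other, and at a much higher
	 * isolation level.
	 */
	if ((sc->gfp_mask & GFP_IOFS) == GFP_IOFS)
		inactive >>= 3;

	return isolated > inactive;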
> > 
> > I'm not sure that this is really a full fix.  Torsten's analysis does
> > appear to point at the real bug: raid1 has code paths which allocate
> > more than a single element from a mempool without starting IO against
> > previous elements.
> 
> ... point at "a" real bug.
> 
> I think there are two bugs here.
> The raid1 bug that Torsten mentions is certainly real (and has been around
> for an embarrassingly long time).
> The bug that I identified in too_many_isolated is also a real bug and can be
> triggered without md/raid1 in the mix.
> So this is not a 'full fix' for every bug in the kernel :-), but it could
> well be a full fix for this particular bug.
> 
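
To make that raid1 pattern concrete for the archives: the problematic
shape is allocating several elements from one mempool before any I/O is
started against the earlier ones.  A made-up illustration (grab_many()
is hypothetical, not the actual raid1 code):

/*
 * A mempool only guarantees forward progress if each user holds at
 * most one reserved element at a time, or has already submitted the
 * I/O that will eventually return the earlier ones to the pool.
 */
static void grab_many(mempool_t *pool, void **elem, int n)
{
	int i;

	for (i = 0; i < n; i++)
		/*
		 * May sleep until someone returns an element, while the
		 * elements taken in earlier iterations sit idle with no
		 * I/O in flight against them.  If every allocator is in
		 * the same position, nothing is ever freed and they all
		 * deadlock.
		 */
		elem[i] = mempool_alloc(pool, GFP_NOIO);

	/* only here would the I/O actually be submitted */
}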

Can we just delete the too_many_isolated() logic?  (Crappy comment
describes what the code does but not why it does it).
