Re: [Bug #14141] order 2 page allocation failures in iwlagn

On Tue, Oct 27, 2009 at 03:52:24PM +0000, Mel Gorman wrote:
> 
> > So, after the move to async/sync, a lot more pages are getting queued
> > for writeback - more than three times the number of pages are queued for
> > writeback with the vanilla kernel. This amount of congestion might be why
> > direct reclaimers and kswapd's timings have changed so much.
> > 
> 
> Or more accurately, the vanilla kernel has queued up a lot more pages for
> IO than when the patch is reverted. I'm not seeing yet why this is.

[ sympathies over confusion about congestion...lots of variables here ]

If wb_kupdate has been able to queue more writes, it is because the
congestion logic isn't stopping it.  We have congestion_wait(), but
before calling that the writeback paths ask the device: are you
congested?  If the answer is yes, they back off instead of queuing
more IO.
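
Roughly, the pattern looks like this (a sketch only, using the helper
names from the 2.6.31-era writeback code; queue_more_writeback() is a
made-up name, not something in the tree):

#include <linux/backing-dev.h>
#include <linux/blkdev.h>
#include <linux/writeback.h>

static void queue_more_writeback(struct backing_dev_info *bdi)
{
	/* Ask first: is this device already congested with writes? */
	if (bdi_write_congested(bdi)) {
		/*
		 * Back off rather than pile on more IO; we wake up when
		 * the device clears congestion or after the timeout,
		 * whichever comes first.
		 */
		congestion_wait(BLK_RW_ASYNC, HZ / 10);
		return;
	}

	/* Not congested: safe to push another batch of dirty pages. */
	/* ... the real code would call writeback_inodes() here ... */
}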

Ideally, direct reclaim will never do writeback.  We want it to be able
to find clean pages that kupdate and friends have already processed.

Waiting for congestion is a funny thing: it only tells us that the
device has managed to finish some IO or that a timeout has passed.
Neither event tells us whether the IO for the reclaimable pages we
actually care about has finished.

One option is to have the VM remember the hashed waitqueue for one of
the pages it direct reclaims and then wait on it.
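
As a rough sketch of what that could look like (not a tested patch;
wait_for_reclaimed_page() is a made-up name, wait_on_page_writeback()
is the existing helper that sleeps on the page's hashed waitqueue):

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/page-flags.h>

/*
 * Sketch only: instead of congestion_wait(), remember one page we just
 * sent for writeback and wait for the IO on that specific page.
 */
static void wait_for_reclaimed_page(struct page *page)
{
	/*
	 * wait_on_page_writeback() sleeps on the page's hashed waitqueue
	 * until PG_writeback is cleared, i.e. until the IO for this
	 * particular reclaimable page has completed.
	 */
	if (PageWriteback(page))
		wait_on_page_writeback(page);
}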

-chris

