On Mon, 6 Sep 2010 11:47:29 +0100 Mel Gorman <mel@xxxxxxxxx> wrote:

> From: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
>
> shrink_page_list() can decide to give up reclaiming a page under a
> number of conditions such as
>
> 1. trylock_page() failure
> 2. page is unevictable
> 3. zone reclaim and page is mapped
> 4. PageWriteback() is true
> 5. page is swapbacked and swap is full
> 6. add_to_swap() failure
> 7. page is dirty and the gfp mask doesn't have GFP_IO or GFP_FS
> 8. page is pinned
> 9. IO queue is congested
> 10. pageout() started IO but has not finished
>
> During lumpy reclaim, any of these failures results in entering
> synchronous lumpy reclaim, but this can be unnecessary. In cases (2),
> (3), (5), (6), (7) and (8), there is no point retrying. This patch
> causes lumpy reclaim to abort when it is known it will fail.
>
> Case (9) is more interesting. The current behaviour is:
>
> 1. start shrink_page_list(async)
> 2. find queue_congested()
> 3. skip the pageout write
> 4. still start shrink_page_list(sync)
> 5. wait on a lot of pages
> 6. again, find queue_congested()
> 7. give up the pageout write again
>
> So it is a meaningless waste of time. However, simply skipping the
> pageout is not good either, as allocating a huge page on x86 needs
> 512 pages, for example. That can be more dirty pages than the queue
> congestion threshold (~=128).
>
> After this patch, pageout() behaves as follows:
>
> - If order > PAGE_ALLOC_COSTLY_ORDER
> 	Always ignore queue congestion.
> - If order <= PAGE_ALLOC_COSTLY_ORDER
> 	Skip the write and disable lumpy reclaim.
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
> Signed-off-by: Mel Gorman <mel@xxxxxxxxx>

Seems nice.

Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>