On Thu, Dec 9, 2010 at 6:44 AM, Mel Gorman <mel@xxxxxxxxx> wrote:
> On Wed, Dec 08, 2010 at 05:23:24PM -0800, Andrew Morton wrote:
>> On Wed, 8 Dec 2010 16:36:21 -0800 Simon Kirby <sim@xxxxxxxxxx> wrote:
>>
>> > On Wed, Dec 08, 2010 at 04:16:59PM +0100, Johannes Weiner wrote:
>> >
>> > > Kswapd tries to rebalance zones persistently until their high
>> > > watermarks are restored.
>> > >
>> > > If the amount of unreclaimable pages in a zone makes this impossible
>> > > for reclaim, though, kswapd will end up in a busy loop without a
>> > > chance of reaching its goal.
>> > >
>> > > This behaviour was observed on a virtual machine with a tiny
>> > > Normal-zone that filled up with unreclaimable slab objects.
>> > >
>> > > This patch makes kswapd skip rebalancing on such 'hopeless' zones
>> > > and leaves them to direct reclaim.
>> >
>> > Hi!
>> >
>> > We are experiencing a similar issue, though with a 757 MB Normal zone,
>> > where kswapd tries to rebalance Normal after an order-3 allocation
>> > while page cache allocations (order-0) keep splitting it back up
>> > again. It can run the whole day like this (SSD storage) without
>> > sleeping.
>>
>> People at google have told me they've seen the same thing. A fork is
>> taking 15 minutes when someone else is doing a dd, because the fork
>> enters direct-reclaim trying for an order-one page. It successfully
>> frees some order-one pages but before it gets back to allocate one, dd
>> has gone and stolen them, or split them apart.
>>
>
> Is there a known test case for this or should I look at doing a
> streaming-IO test with a basic workload constantly forking in the
> background to measure the fork latency?

We were seeing some system daemons (sshd) being OOM killed while running
in the same memory container as the dd test. I assume we can reproduce it
by running dd on a 10G file in a 1G container while running unixbench
(fork/exec loop) at the same time?

--Ying

>
>> This problem would have got worse when slub came along doing its stupid
>> unnecessary high-order allocations.
>>
>> Billions of years ago a direct-reclaimer had a one-deep cache in the
>> task_struct into which it freed the page to prevent it from getting
>> stolen.
>>
>> Later, we took that out because pages were being freed into the
>> per-cpu-pages magazine, which is effectively task-local anyway. But
>> per-cpu-pages are only for order-0 pages. See slub stupidity, above.
>>
>> I expect that this is happening so repeatably because the
>> direct-reclaimer is doing a sleep somewhere after freeing the pages it
>> needs - if it wasn't doing that then surely the window wouldn't be wide
>> enough for it to happen so often. But I didn't look.
>>
>> Suitable fixes might be
>>
>> a) don't go to sleep after the successful direct-reclaim.
>>
>
> I submitted a patch for this a long time ago but at the time we didn't
> have a test case that made a difference to it. Might be worth
> revisiting. I can't find the related patch any more but it was fairly
> trivial.

If you still have the patch, maybe we can give it a try on our case.

--Ying

>
> --
> Mel Gorman
> Part-time PhD Student                          Linux Technology Center
> University of Limerick                         IBM Dublin Software Lab
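
For reference, Mel's question about measuring fork latency under streaming
I/O could be approximated with a small userspace probe. This is only a
minimal sketch, not code from the thread: it assumes the background load is
something like the dd/unixbench mix Ying describes, forks repeatedly, times
each fork() from the parent's side with CLOCK_MONOTONIC, and reports the
average and worst-case latency.

/* fork latency probe - hypothetical helper, not part of any posted patch */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

/* current monotonic time in milliseconds */
static double now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

int main(int argc, char **argv)
{
	int iterations = argc > 1 ? atoi(argv[1]) : 1000;
	double worst = 0.0, total = 0.0;

	for (int i = 0; i < iterations; i++) {
		double start = now_ms();
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			return 1;
		}
		if (pid == 0)
			_exit(0);	/* child exits immediately */

		/* parent-side latency of this fork() call */
		double elapsed = now_ms() - start;

		waitpid(pid, NULL, 0);
		total += elapsed;
		if (elapsed > worst)
			worst = elapsed;
	}

	printf("forks: %d  avg: %.3f ms  worst: %.3f ms\n",
	       iterations, total / iterations, worst);
	return 0;
}

Compiled with plain gcc and run while dd streams to a file, a growing
worst-case figure would correspond to the fork stalls described above,
where direct reclaim frees order-one pages only to have them stolen or
split before the allocation retries.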