Hi, Hannes.

On Thu, Aug 26, 2010 at 08:29:04PM +0200, Johannes Weiner wrote:
> On Thu, Aug 26, 2010 at 04:14:15PM +0100, Mel Gorman wrote:
> > If congestion_wait() is called when there is no congestion, the caller
> > will wait for the full timeout. This can cause unreasonable and
> > unnecessary stalls. There are a number of potential modifications that
> > could be made to wake sleepers but this patch measures how serious the
> > problem is. It keeps count of how many congested BDIs there are. If
> > congestion_wait() is called with no BDIs congested, the tracepoint will
> > record that the wait was unnecessary.
>
> I am not convinced that unnecessary is the right word. On a workload
> without any IO (i.e. no congestion_wait() necessary, ever), I noticed
> the VM regressing both in time and in reclaiming the right pages when
> simply removing congestion_wait() from the direct reclaim paths (the
> one in __alloc_pages_slowpath and the other one in
> do_try_to_free_pages).

It's not exactly the same as your experiment, but I had a similar
experience in a swapout experiment. The system had lots of anon pages
but almost no file pages, and it had already started to swap out; in
other words, the system had no free memory. In that state I forked a new
process which mmaps some MB of pages and touches them, which means the
VM has to swap out some MB of pages on behalf of the new process, and I
measured the time until it finished touching all the pages.

Sometimes it was fast, sometimes it was slow; the time gap was almost a
factor of two. The interesting thing is that when it was fast, many of
the pages had been reclaimed by kswapd.

Ah, I should add that I used swap on a ramdisk and reserved the swap
pages by touching them before starting the experiment, so I would say
it's not a _flushd_ effect.

> So just being stupid and waiting for the timeout in direct reclaim
> while kswapd can make progress seemed to do a better job for that
> load.
>
> I can not exactly pinpoint the reason for that behaviour, it would be
> nice if somebody had an idea.
I just thought the cause is that direct reclaim reclaims only 32 pages
at a time while kswapd can reclaim many pages in a batch. But I didn't
look into it further because I was busy. Does it make sense?

--
Kind regards,
Minchan Kim

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@xxxxxxxxxx
For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx