On Fri, 12 Mar 2010 07:39:26 +0100 Christian Ehrhardt <ehrhardt@xxxxxxxxxxxxxxxxxx> wrote:

> Andrew Morton wrote:
> > On Mon, 8 Mar 2010 11:48:20 +0000
> > Mel Gorman <mel@xxxxxxxxx> wrote:
> >
> >> Under memory pressure, the page allocator and kswapd can go to sleep using
> >> congestion_wait(). In two of these cases, it may not be the appropriate
> >> action as congestion may not be the problem.
> >
> > clear_bdi_congested() is called each time a write completes and the
> > queue is below the congestion threshold.
> >
> > So if the page allocator or kswapd calls congestion_wait() against a
> > non-congested queue, they'll wake up on the very next write completion.
>
> Well, the issue came up in all kinds of loads where there are no writes
> at all that could wake up congestion_wait().
> That's true for several benchmarks, but for real workloads as well, e.g. a
> backup job reading almost all files sequentially and pumping the data out
> via the network.

Why is reclaim going into congestion_wait() at all if there's heaps of
clean reclaimable pagecache lying around?

(I don't think the read side of congestion_wqh[] has ever been used, btw)

> > Hence the above-quoted claim seems to me to be a significant mis-analysis and
> > perhaps explains why the patchset didn't seem to help anything?
>
> While I might have misunderstood you and it is a mis-analysis in your
> opinion, it fixes an 80% throughput regression on sequential read
> workloads. That's not nothing - it's more like absolutely required :-)
>
> You might check out the discussion with the subject "Performance
> regression in scsi sequential throughput (iozone) due to "e084b -
> page-allocator: preserve PFN ordering when __GFP_COLD is set"".
> While the original subject is misleading from today's point of view, it
> contains a lengthy discussion about exactly when/why/where time is lost
> due to congestion_wait(), with a lot of traces, counters, data
> attachments and suchlike.

Well, if we're not encountering lots of dirty pages in reclaim then we
shouldn't be waiting for writes to retire, of course. But if we're not
encountering lots of dirty pages in reclaim, we should normally be
reclaiming pages.

I could understand reclaim accidentally going into congestion_wait() if
it hit a large pile of pages which are unreclaimable for reasons other
than being dirty, but is that happening in this case? If not, we broke
it again.
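
For reference, the wait/wakeup pair under discussion fits together
roughly like this. This is a condensed sketch along the lines of
mm/backing-dev.c in 2.6.33-era kernels, not a verbatim copy; it is
kernel-internal code, and the exact signatures and bit names have
shifted between versions:

#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/backing-dev.h>

/*
 * One wait queue for async (0) waiters and one for sync (1) waiters.
 * Every congestion_wait() caller sleeps here, regardless of which
 * bdi it is actually worried about.
 */
static wait_queue_head_t congestion_wqh[2] = {
	__WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[0]),
	__WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[1])
};

/*
 * Sleep for up to 'timeout' jiffies, or until someone clears
 * congestion on some bdi and wakes the queue, whichever comes first.
 */
long congestion_wait(int sync, long timeout)
{
	long ret;
	DEFINE_WAIT(wait);
	wait_queue_head_t *wqh = &congestion_wqh[sync];

	prepare_to_wait(wqh, &wait, TASK_UNINTERRUPTIBLE);
	ret = io_schedule_timeout(timeout);
	finish_wait(wqh, &wait);
	return ret;
}

/*
 * Called on write completion when the queue drops back below the
 * congestion threshold: clear the congested bit and wake any
 * congestion_wait() sleepers immediately.
 */
void clear_bdi_congested(struct backing_dev_info *bdi, int sync)
{
	enum bdi_state bit;
	wait_queue_head_t *wqh = &congestion_wqh[sync];

	bit = sync ? BDI_sync_congested : BDI_async_congested;
	clear_bit(bit, &bdi->state);
	smp_mb__after_clear_bit();
	if (waitqueue_active(wqh))
		wake_up(wqh);
}

The failure mode described above falls straight out of this sketch: on
a pure-read workload nothing ever completes a write, so nothing calls
clear_bdi_congested(), and every congestion_wait() sleeps for its full
timeout instead of being woken early.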