On Fri, 11 Jun 2010 21:44:11 +0100 Mel Gorman <mel@xxxxxxxxx> wrote:

> > Well.  The main problem is that we're doing too much IO off the LRU of
> > course.
>
> What would be considered "too much IO"?

Enough to slow things down ;)

This problem used to hurt a lot.  Since those times we've decreased the
default value of /proc/sys/vm/dirty*ratio quite a bit, which surely
papered over the problem to a large extent.

We shouldn't forget that those ratios _are_ tunable, after all.  If we
make a change which explodes the kernel when someone has tuned it to 40%
then that's a problem, and we'll need to scratch our heads over the
magnitude of that problem.

As for a workload which triggers the problem on a large machine which is
tuned to 20%/10%: dunno.  If we're reliably activating pages when
dirtying them then perhaps it's no longer a problem with the default
tuning.  I'd do some testing with mem=256M though - that has a habit of
triggering weirdnesses.

btw, I'm trying to work out whether zap_pte_range() really needs to run
set_page_dirty().  Didn't (pte_dirty() && !PageDirty()) pages get
themselves stamped out?
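
For concreteness, the ratios mentioned above live in /proc/sys/vm/
(dirty_ratio and dirty_background_ratio).  Here is a minimal userspace
sketch in C that reads the current vm.dirty_ratio and then lowers it -
the target value of 20 is only an example, and writing requires root:

	#include <stdio.h>

	int main(void)
	{
		FILE *f;
		int ratio;

		/* Read the current foreground dirty ratio. */
		f = fopen("/proc/sys/vm/dirty_ratio", "r");
		if (!f || fscanf(f, "%d", &ratio) != 1) {
			perror("read dirty_ratio");
			return 1;
		}
		fclose(f);
		printf("dirty_ratio = %d%%\n", ratio);

		/* Equivalent of "sysctl -w vm.dirty_ratio=20". */
		f = fopen("/proc/sys/vm/dirty_ratio", "w");
		if (!f) {
			perror("write dirty_ratio");
			return 1;
		}
		fprintf(f, "%d\n", 20);
		fclose(f);
		return 0;
	}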
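
On the zap_pte_range() question: the concern is a page whose pte dirty
bit was set by the CPU but on which nothing has called set_page_dirty()
yet; if the pte is torn down without propagating that bit, writeback
never learns the page is dirty.  A userspace model of that transfer -
not the kernel code, and struct pte / struct page here are hypothetical
stand-ins:

	#include <stdbool.h>
	#include <stdio.h>

	struct pte  { bool dirty; };     /* hardware-maintained dirty bit */
	struct page { bool pg_dirty; };  /* what PageDirty() would test   */

	/* Stand-in for set_page_dirty(): mark the struct page dirty so
	 * that writeback will eventually see it. */
	static void set_page_dirty(struct page *page)
	{
		page->pg_dirty = true;
	}

	/* Model of the unmap path: transfer the pte's dirty bit to the
	 * page before the pte goes away. */
	static void zap_pte(struct pte *pte, struct page *page)
	{
		if (pte->dirty && !page->pg_dirty)
			set_page_dirty(page);
		pte->dirty = false;      /* pte is being destroyed */
	}

	int main(void)
	{
		struct pte pte = { .dirty = true };
		struct page page = { .pg_dirty = false };

		zap_pte(&pte, &page);
		printf("page dirty after zap: %d\n", page.pg_dirty);
		return 0;
	}

If such pages really are guaranteed to have had set_page_dirty() run on
them earlier (the "stamped out" case), the transfer in the zap path
would be redundant - which is exactly the question above.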