On Wed, Dec 14, 2011 at 03:41:33PM +0000, Mel Gorman wrote:
> It was observed that scan rates from direct reclaim during tests
> writing to both fast and slow storage were extraordinarily high. The
> problem was that while pages were being marked for immediate reclaim
> when writeback completed, the same pages were being encountered over
> and over again during LRU scanning.
>
> This patch isolates file-backed pages that are to be reclaimed when
> clean on their own LRU list.

Excuse me if I sound like a broken record, but have those observations
of high scan rates persisted with the per-zone dirty limits patch set?
In my tests with pzd, the scan rates went down considerably, together
with the immediate reclaim / vmscan writes.

Our dirty limits are pretty low - if reclaim keeps shuffling through
dirty pages, where are the 80% reclaimable pages?! To me, this sounds
like the unfair distribution of dirty pages among zones again. Is
there a different explanation that I missed?

PS: It also seems a bit out of place in this series...?

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx. For more info on Linux MM,
see: http://www.linux-mm.org/ .