> Sigh.  We have sooo many problems with writeback and latency.  Read
> https://bugzilla.kernel.org/show_bug.cgi?id=12309 and weep.  Everyone's
> running away from the issue and here we are adding code to solve some
> alleged stack-overflow problem which seems to be largely a non-problem,
> by making changes which may worsen our real problems.

This looks like some vmscan/writeback interaction issue.

Firstly, the CFQ IO scheduler can already prevent read IO from being
delayed by lots of async write IO; see commits 365722bb and 8e2967555
from late 2009.

Reading a big file on an idle system:

  680897928 bytes (681 MB) copied, 15.8986 s, 42.8 MB/s

Reading the same big file while doing sequential writes to another file:

  680897928 bytes (681 MB) copied, 27.6007 s, 24.7 MB/s
  680897928 bytes (681 MB) copied, 25.6592 s, 26.5 MB/s

So CFQ offers reasonable read performance under heavy writeback.

Secondly, I can only feel the responsiveness lags when there is memory
pressure _in addition to_ heavy writeback:

  cp /dev/zero /tmp
  No lags.

  usemem 1g --sleep 1000
  Still no lags.

  usemem 1g --sleep 1000
  Still no lags.

  usemem 1g --sleep 1000
  Begin to feel lags at times.

My desktop has 4G memory and no swap space, so the lags are correlated
with page reclaim pressure.

The above symptoms are matched very well by the patches posted by
KOSAKI and me:

- vmscan: raise the bar to PAGEOUT_IO_SYNC stalls
- vmscan: synchronous lumpy reclaim don't call congestion_wait()

However, kernels as early as 2.6.18 are reported to have the problem,
so there may be more hidden issues.

Thanks,
Fengguang

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body
to majordomo@xxxxxxxxxx.  For more info on Linux MM, see:
http://www.linux-mm.org/ .
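
P.S. For anyone who wants to repeat the read-under-writeback measurement,
here is a rough sketch.  File names and the 64 MB size are illustrative
(the post used a ~680 MB file), and without dropping the page cache as
root (via /proc/sys/vm/drop_caches) the read may be served from cache,
so absolute numbers will not be comparable:

```shell
#!/bin/sh
# Sketch: time a sequential read while a background sequential writer
# generates async writeback.  Sizes/paths are illustrative only.
set -e
workdir=$(mktemp -d)

# Create the file we will read back (64 MB instead of ~680 MB).
dd if=/dev/zero of="$workdir/readfile" bs=1M count=64 status=none

# Start a sequential writer in the background to produce dirty pages.
dd if=/dev/zero of="$workdir/writefile" bs=1M count=64 status=none &
writer=$!

# Time the read while the writer runs.  With CFQ, reads are expected to
# keep reasonable throughput despite the competing async writes.
start=$(date +%s)
dd if="$workdir/readfile" of=/dev/null bs=1M status=none
end=$(date +%s)
elapsed=$((end - start))

wait "$writer"
echo "read finished in ${elapsed}s"
rm -rf "$workdir"
```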
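
P.P.S. The usemem steps above need Fengguang's usemem tool; if it is not
installed, a crude stand-in is to pin some memory in a shell variable
and hold it.  The 16 MB size here is deliberately tiny; scale it up
(e.g. toward 1g, as in the post) to actually create reclaim pressure:

```shell
#!/bin/sh
# Crude usemem-like sketch: allocate and hold ~16 MB, then sleep briefly
# (the post used "usemem 1g --sleep 1000").  Size is illustrative only.
set -e
size=$((16 * 1024 * 1024))

# Fill a shell variable with $size non-null bytes to keep them resident.
blob=$(head -c "$size" /dev/zero | tr '\0' 'x')
held=${#blob}
echo "holding ${held} bytes"

sleep 1          # stand-in for --sleep 1000
unset blob       # release the memory
```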