> On Thu, Apr 15, 2010 at 01:09:01PM +0900, KOSAKI Motohiro wrote:
> > Hi
> >
> > > How about this? For now, we stop direct reclaim from doing writeback
> > > only on order zero allocations, but allow it for higher order
> > > allocations. That will prevent the majority of situations where
> > > direct reclaim blows the stack and interferes with background
> > > writeout, but won't cause lumpy reclaim to change behaviour.
> > > This reduces the scope of impact and hence the testing and
> > > validation that needs to be done.
> >
> > Tend to agree, but I would propose a slightly different algorithm to
> > avoid an incorrect OOM.
> >
> > For high order allocations:
> >   allow lumpy reclaim and pageout() for both kswapd and direct
> >   reclaim, i.e. the same as current behaviour. Yes, the same as you
> >   proposed.
> >
> > For low order allocations:
> >   - kswapd: always delegate IO to the flusher thread
> >   - direct reclaim: delegate IO to the flusher thread only if VM
> >     pressure is low
>
> IMO, this really doesn't fix either of the problems - the bad IO
> patterns nor the stack usage. All it will take is a bit more memory
> pressure to trigger stack and IO problems, and the user reporting the
> problems is generating an awful lot of memory pressure...

This patch doesn't address stack usage, because:

- again, I think every stack eater should be put on a diet.
- in a world that still allows lumpy reclaim, denying writeback only
  for low order reclaim doesn't solve anything.

Please don't forget that a reclaim failure at priority=0 invokes the
OOM killer; I don't imagine anyone wants that. And which IO workload
triggers vmscan at priority < 6?
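
To make the policy under discussion concrete, here is a rough userspace
sketch of the gating decision. This is deliberately not mm/vmscan.c
code: the reclaim_ctx struct, the may_pageout() helper and the
"priority < 6" threshold are only illustrative assumptions taken from
the description above.

/*
 * Userspace sketch of the writeback-gating policy discussed above.
 * NOT kernel code: reclaim_ctx and may_pageout() are hypothetical,
 * and the priority threshold is just the example value from the
 * "priority < 6" question.
 */
#include <stdbool.h>
#include <stdio.h>

#define DEF_PRIORITY 12   /* reclaim starts here and counts down toward 0 */

struct reclaim_ctx {
	int  order;        /* allocation order being reclaimed for   */
	bool is_kswapd;    /* kswapd context vs. direct reclaim      */
	int  priority;     /* current scan priority, DEF_PRIORITY..0 */
};

/*
 * Return true if this reclaim context may call pageout() itself;
 * false means dirty pages are left for the flusher threads.
 */
static bool may_pageout(const struct reclaim_ctx *rc)
{
	/* High order (lumpy) reclaim: keep current behaviour for both
	 * kswapd and direct reclaim, so contiguous ranges can be freed. */
	if (rc->order > 0)
		return true;

	/* Order-0 kswapd: always delegate IO to the flusher threads. */
	if (rc->is_kswapd)
		return false;

	/* Order-0 direct reclaim: only write back once VM pressure is
	 * high, i.e. the scan priority has dropped below the threshold. */
	return rc->priority < 6;
}

int main(void)
{
	struct reclaim_ctx light = { .order = 0, .is_kswapd = false, .priority = DEF_PRIORITY };
	struct reclaim_ctx heavy = { .order = 0, .is_kswapd = false, .priority = 2 };
	struct reclaim_ctx lumpy = { .order = 3, .is_kswapd = true,  .priority = DEF_PRIORITY };

	printf("order-0 direct reclaim, low pressure : %d\n", may_pageout(&light)); /* 0 */
	printf("order-0 direct reclaim, high pressure: %d\n", may_pageout(&heavy)); /* 1 */
	printf("high-order kswapd (lumpy) reclaim    : %d\n", may_pageout(&lumpy)); /* 1 */
	return 0;
}

The only point of the sketch is the three-way split: high order reclaim
keeps its current writeback behaviour, order-0 kswapd always defers to
the flushers, and order-0 direct reclaim writes back only once the scan
priority indicates real pressure.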