On Sat, Feb 11, 2012 at 08:44:45PM +0800, Wu Fengguang wrote:
> <SNIP>
> --- linux.orig/mm/vmscan.c	2012-02-03 21:42:21.000000000 +0800
> +++ linux/mm/vmscan.c	2012-02-11 17:28:54.000000000 +0800
> @@ -813,6 +813,8 @@ static unsigned long shrink_page_list(st
> 
>  		if (PageWriteback(page)) {
>  			nr_writeback++;
> +			if (PageReclaim(page))
> +				congestion_wait(BLK_RW_ASYNC, HZ/10);
>  			/*
>  			 * Synchronous reclaim cannot queue pages for
>  			 * writeback due to the possibility of stack overflow

I didn't look closely at the rest of the patch; I'm only focusing on the
congestion_wait part. You called this out yourself, but it is in fact
really, really bad. With this in place, a user copying a large amount of
data to slow storage like a USB stick will stall the system severely. A
parallel streaming reader will certainly have major issues: it will
enter page reclaim, find a batch of dirty USB-backed pages at the end of
the LRU (potentially 20% of memory) and stall for HZ/10 on each one of
them. How badly each process is affected will vary.

For the OOM problem, a more reasonable stopgap might be to identify when
a process scanning a memcg at high priority has encountered nothing but
PageReclaim pages and made no forward progress, and to congestion_wait()
only in that situation. A rough, untested sketch of what I mean is at
the end of this mail.

A preferable way would be to wait until the flusher wakes up a waiter
once the PageReclaim pages have been written out, because we want to
keep moving away from congestion_wait() if at all possible.

Another possibility would be to take another look at LRU_IMMEDIATE, but
right now it requires a page flag and I haven't devised a way around
that. Besides, it would only address the problem of PageReclaim pages
being encountered; it would not handle the case where a memcg is filled
with PageReclaim pages.

-- 
Mel Gorman
SUSE Labs
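
Roughly, and completely untested, the stopgap I have in mind looks
something like the following, assuming a 3.3-rc mm/vmscan.c where
shrink_page_list() still takes priority as a parameter. nr_pgreclaim is
just an illustrative name and the context lines are approximate:

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ static unsigned long shrink_page_list(st
 	unsigned long nr_dirty = 0;
 	unsigned long nr_congested = 0;
+	unsigned long nr_pgreclaim = 0;
 	unsigned long nr_reclaimed = 0;
 	unsigned long nr_writeback = 0;
@@
 		if (PageWriteback(page)) {
 			nr_writeback++;
+			/*
+			 * Count writeback pages the flusher has already
+			 * been asked to write back (marked PageReclaim).
+			 */
+			if (PageReclaim(page))
+				nr_pgreclaim++;
 			/*
 			 * Synchronous reclaim cannot queue pages for
@@
+	/*
+	 * Throttle only when a memcg scan at elevated priority reclaimed
+	 * nothing and every writeback page it encountered was already
+	 * marked PageReclaim, i.e. the flusher is known to be working on
+	 * them and there is nothing else on this list worth waiting for.
+	 */
+	if (!global_reclaim(sc) && priority < DEF_PRIORITY - 2 &&
+	    nr_reclaimed == 0 && nr_writeback &&
+	    nr_pgreclaim == nr_writeback)
+		congestion_wait(BLK_RW_ASYNC, HZ/10);
 	*ret_nr_dirty += nr_dirty;
 	*ret_nr_writeback += nr_writeback;
 	return nr_reclaimed;

Even that would need testing against the USB-stick case above, as an
HZ/10 stall per shrink_page_list() call can still add up. The point is
only that the stall would be confined to memcg reclaim at high priority
instead of hitting every caller that meets a PageReclaim page.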