On Tue, Dec 7, 2010 at 5:28 PM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Tue, 7 Dec 2010 17:24:12 -0800
> Ying Han <yinghan@xxxxxxxxxx> wrote:
>
>> On Tue, Dec 7, 2010 at 4:39 PM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>> > On Tue, 7 Dec 2010 09:28:01 -0800
>> > Ying Han <yinghan@xxxxxxxxxx> wrote:
>> >
>> >> On Tue, Dec 7, 2010 at 4:33 AM, Mel Gorman <mel@xxxxxxxxx> wrote:
>> >
>> >> > Potentially there will
>> >> > also be a very large number of new IO sources. I confess I haven't read the
>> >> > thread yet so maybe this has already been thought of but it might make sense
>> >> > to have a 1:N relationship between kswapd and memcgroups and cycle between
>> >> > containers. The difficulty will be a latency between when kswapd wakes up
>> >> > and when a particular container is scanned. The closer the ratio is to 1:1,
>> >> > the less the latency will be but the higher the contention on the LRU lock
>> >> > and IO will be.
>>
>> No, we didn't talk about that mapping anywhere in the thread. Having
>> many kswapd threads at the same time isn't a problem as long as there is
>> no locking contention (e.g., 1k kswapd threads on a 1k fake NUMA node
>> system). So breaking the zone->lru_lock should work
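
To make the 1:N idea above a bit more concrete, here is a rough user-space
sketch (plain C with pthreads, not kernel code) of one kswapd-style worker
cycling round-robin over N containers. Every name in it (struct container,
container_scan, NR_CONTAINERS) is made up for illustration and is not an
existing kernel interface; it only models the tradeoff Mel describes: the
more containers one worker services, the longer a given container may wait
to be scanned, while fewer containers per worker means more threads
contending on LRU locks and IO.

/*
 * Sketch only: one reclaim worker servicing N containers in turn.
 * A real implementation would be wakeup-driven (per-memcg watermarks)
 * rather than a polling loop, and would use proper synchronization
 * instead of a volatile flag.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define NR_CONTAINERS 4                 /* the "N" in the 1:N mapping */

struct container {
        int id;
        pthread_mutex_t lru_lock;       /* stand-in for a per-container LRU lock */
        unsigned long nr_to_reclaim;    /* pages this container wants reclaimed */
};

static struct container containers[NR_CONTAINERS];
static volatile bool stop;

/* Placeholder for scanning one container's LRU lists. */
static void container_scan(struct container *c)
{
        pthread_mutex_lock(&c->lru_lock);
        if (c->nr_to_reclaim) {
                c->nr_to_reclaim--;     /* pretend one page was reclaimed */
                printf("scanned container %d, %lu left\n",
                       c->id, c->nr_to_reclaim);
        }
        pthread_mutex_unlock(&c->lru_lock);
}

/* One kswapd-like worker cycling over all N containers. */
static void *kswapd_worker(void *arg)
{
        int i = 0;

        while (!stop) {
                container_scan(&containers[i]);
                i = (i + 1) % NR_CONTAINERS;  /* latency per container grows with N */
                usleep(1000);
        }
        return NULL;
}

int main(void)
{
        pthread_t tid;

        for (int i = 0; i < NR_CONTAINERS; i++) {
                containers[i].id = i;
                pthread_mutex_init(&containers[i].lru_lock, NULL);
                containers[i].nr_to_reclaim = 5;
        }

        pthread_create(&tid, NULL, kswapd_worker, NULL);
        sleep(1);
        stop = true;
        pthread_join(tid, NULL);
        return 0;
}

With N per worker close to 1 the cycle latency disappears, but the worker
count (and with it lock and IO contention) goes up, which is exactly the
case the per-memcg lru_lock split is meant to address.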