> On Tue, Dec 7, 2010 at 4:33 AM, Mel Gorman <mel@xxxxxxxxx> wrote:
> > On Mon, Nov 29, 2010 at 10:49:42PM -0800, Ying Han wrote:
> >> There is a kswapd kernel thread for each memory node. We add a
> >> different kswapd for each cgroup.
> >
> > What is considered a normal number of cgroups in production? 10, 50, 10000?
>
> Normally it is fewer than 100. I assume there is a cap on the number of
> cgroups that can be created per system.
>
> > If it's a really large number and all the cgroup kswapds wake at the
> > same time, the zone LRU lock will be very heavily contended.
>
> Thanks for reviewing the patch~
>
> Agreed. The zone->lru_lock is another thing we are looking at.
> Eventually, we need to break the lock into per-zone per-memcg LRU locks.

That may lead to the following bad scenario, which is the reason we are
using zone->lru_lock now:

1) memcg reclaim starts.
2) It finds that the page at the LRU tail has its pte accessed bit set.
3) memcg reclaim decides to move the page to the active list of the
   memcg LRU, and the pte accessed bit is cleared. But the page still
   remains on the inactive list of the global LRU.
4) Sadly, global reclaim then discards the page quickly, because its
   accessed bit was already cleared by memcg reclaim.

But if we have to modify both the memcg and global LRUs, we can't avoid
taking zone->lru_lock anyway. So we don't use a memcg-special lock.

Thanks.