On Wed, Jan 11, 2012 at 7:21 PM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Wed, 11 Jan 2012 16:50:09 -0800
> Ying Han <yinghan@xxxxxxxxxx> wrote:
>
>> On Wed, Jan 11, 2012 at 3:59 PM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>> > On Wed, 11 Jan 2012 15:17:42 -0800 (PST)
>> > Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>> >
>> >> On Wed, 11 Jan 2012, Ying Han wrote:
>> >>
>> >> > We have the nr_mlock stat both in meminfo and in vmstat system wide; this
>> >> > patch adds the mlock field to the per-memcg memory stat. The stat enhances
>> >> > the metrics exported by memcg, especially when used together with the
>> >> > "unevictable" lru stat.
>> >> >
>> >> > --- a/include/linux/page_cgroup.h
>> >> > +++ b/include/linux/page_cgroup.h
>> >> > @@ -10,6 +10,7 @@ enum {
>> >> >  	/* flags for mem_cgroup and file and I/O status */
>> >> >  	PCG_MOVE_LOCK, /* For race between move_account v.s. following bits */
>> >> >  	PCG_FILE_MAPPED, /* page is accounted as "mapped" */
>> >> > +	PCG_MLOCK, /* page is accounted as "mlock" */
>> >> >  	/* No lock in page_cgroup */
>> >> >  	PCG_ACCT_LRU, /* page has been accounted for (under lru_lock) */
>> >> >  	__NR_PCG_FLAGS,
>> >>
>> >> Is this really necessary?  KAMEZAWA-san is engaged in trying to reduce
>> >> the number of PageCgroup flags, and I expect that in due course we shall
>> >> want to merge them in with Page flags, so adding more is unwelcome.
>> >> I'd have thought that with memcg_ hooks in the right places,
>> >> a separate flag would not be necessary?
>> >>
>> >
>> > Please don't ;)
>> >
>> > NR_UNEVICTABLE_LRU is not enough ?
>>
>> Seems not.
>>
>> The unevictable lru includes more than mlock()'d pages (SHM_LOCK'd pages,
>> etc.). There are use cases where we would like to know the mlocked size
>> per-cgroup. We used to get that in the fake-numa based container by
>> reading the value from per-node meminfo, but we are missing that
>> information in memcg. What do you think?
>>
>
> Hm. Can the # of mlocked pages be obtained as a sum over /proc/<pid>/ ?

That is tough: then we would have to do the calculation by adding up the
values for all the pids within a cgroup.

> BTW, roughly..
>
>   (inactive_anon + active_anon) - rss = # of unlocked shm
>
>   cache - (inactive_file + active_file) = total # of shm
>
> Then,
>
>   (cache - (inactive_file + active_file)) - ((inactive_anon + active_anon) - rss)
>   = cache + rss - (sum of inactive/active lru)
>   = locked shm.
>
> Hm, but this works only when unmapped swapcache is small ;)

We might be getting a rough number, but we have use cases relying on more
accurate output. Thoughts?

Thanks

--Ying

> Thanks,
> -Kame
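
For reference, a rough userspace sketch of the two workarounds discussed
above: summing VmLck from /proc/<pid>/status over a cgroup's tasks file,
and Kame's estimate from memory.stat. The mount point and group name are
assumptions (adjust for your setup), and the memory.stat estimate only
holds when unmapped swapcache is small.

    #!/usr/bin/env python3
    # Illustration only: approximate per-cgroup mlocked memory from userspace.
    # Assumes a memcg (v1) hierarchy mounted at /dev/cgroup/memory.
    import os

    def mlocked_by_summing_tasks(cgroup_path):
        """Sum VmLck over every pid in the cgroup's tasks file.
        Racy (tasks come and go) and counts shared locked pages once per
        process that maps them."""
        total_kb = 0
        with open(os.path.join(cgroup_path, "tasks")) as f:
            pids = [line.strip() for line in f if line.strip()]
        for pid in pids:
            try:
                with open("/proc/%s/status" % pid) as status:
                    for line in status:
                        if line.startswith("VmLck:"):
                            total_kb += int(line.split()[1])  # value is in kB
                            break
            except IOError:
                pass  # task exited between reading tasks and /proc
        return total_kb

    def locked_shm_estimate(cgroup_path):
        """Rough estimate from memory.stat:
        cache + rss - (inactive/active anon + file LRU sizes).
        Only meaningful when unmapped swapcache is small."""
        stat = {}
        with open(os.path.join(cgroup_path, "memory.stat")) as f:
            for line in f:
                key, val = line.split()
                stat[key] = int(val)  # values are in bytes
        lru = (stat["inactive_anon"] + stat["active_anon"] +
               stat["inactive_file"] + stat["active_file"])
        return stat["cache"] + stat["rss"] - lru

    if __name__ == "__main__":
        group = "/dev/cgroup/memory/foo"  # hypothetical group
        print("VmLck sum (kB):", mlocked_by_summing_tasks(group))
        print("locked shm estimate (bytes):", locked_shm_estimate(group))

Neither gives the accurate per-memcg mlock count the proposed stat would
provide, which is the point of the discussion above.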