On Thu, May 19, 2011 at 1:01 AM, Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> wrote:
> * Ying Han <yinghan@xxxxxxxxxx> [2011-05-18 17:55:11]:
> That seems like a good idea, so +1; we need to do this.
> > The new API exports numa_maps on a per-memcg basis. This is a piece of
> > useful information, as it shows the per-memcg page distribution across
> > real NUMA nodes.
> >
> > One of the use cases is evaluating application performance by combining
> > this information with the cpu allocation to the application.
> >
> > The output of memory.numa_stat tries to follow a format similar to that
> > of numa_maps:
> >
> > total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ...
> > file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ...
> > anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ...
> >
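For completeness, here is a minimal userspace sketch (not part of this patch)
that consumes the new file by parsing the "key=<pages> N<node>=<pages>" lines
described above. The /dev/cgroup mount point is taken from the sample output
further down and may differ on other setups.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        /* Mount point taken from the sample output; adjust as needed. */
        const char *path = "/dev/cgroup/memory/memory.numa_stat";
        char line[1024];
        FILE *fp = fopen(path, "r");

        if (!fp) {
                perror(path);
                return 1;
        }

        /* Each line looks like: total=<pages> N0=<pages> N1=<pages> ... */
        while (fgets(line, sizeof(line), fp)) {
                char *tok = strtok(line, " \n");

                while (tok) {
                        char *eq = strchr(tok, '=');

                        if (eq)
                                printf("%.*s: %lu pages\n",
                                       (int)(eq - tok), tok,
                                       strtoul(eq + 1, NULL, 10));
                        tok = strtok(NULL, " \n");
                }
        }

        fclose(fp);
        return 0;
}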
Thanks for the +1 :)
> Can you see whether the total is greater or less than the actual value?
> > $ cat /dev/cgroup/memory/memory.numa_stat
> > total=317674 N0=101850 N1=72552 N2=30120 N3=113142
> > file=288219 N0=98046 N1=59220 N2=23578 N3=107375
> > anon=25699 N0=3804 N1=10124 N2=6540 N3=5231
> >
> > Note: I noticed that <total pages> is not equal to the sum of the rest of
> > the counters. I might need to change the way I get that counter; comments
> > are welcome.
> >
> Do you have any pages mlocked?
As I replied to Daisuke, I think the problem is that some pages charged to the memcg might not be on the LRU.
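For reference, with the sample output above: file + anon = 288219 + 25699 =
313918 pages, while total = 317674, so total is larger, by 3756 pages; that
gap would be the charged pages that are not on the anon/file LRU lists.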
--Ying
--
> > changes v1..v2:
> > 1. also add the file and anon pages to the per-node distribution.
> >
> > Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
> > ---
> > mm/memcontrol.c | 109 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > 1 files changed, 109 insertions(+), 0 deletions(-)
> >
> Three Cheers,
> Balbir