proc-meminfo: Why is Mapped much higher than Active(file) + Inactive(file)?

Hi dear memory experts,

We have run into a hibernation problem: global_page_state(NR_FILE_MAPPED) is
much higher than global_page_state(NR_INACTIVE_FILE) +
global_page_state(NR_ACTIVE_FILE), which causes unexpected behavior when
calculating the number of reclaimable pages
(https://bugzilla.kernel.org/show_bug.cgi?id=97201):

~> cat /proc/meminfo
MemTotal:       11998028 kB
MemFree:         7592344 kB
MemAvailable:    7972260 kB
Buffers:          229960 kB
Cached:           730140 kB
SwapCached:       133868 kB
Active:          1256224 kB
Inactive:         599452 kB
Active(anon):     904904 kB
Inactive(anon):   436112 kB
Active(file):     351320 kB
Inactive(file):   163340 kB
Unevictable:          60 kB
Mlocked:              60 kB
SwapTotal:      10713084 kB
SwapFree:        9850232 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:        847876 kB
Mapped:          2724140 kB        //very big...
Shmem:            445440 kB
Slab:             129984 kB
SReclaimable:      68368 kB
SUnreclaim:        61616 kB
KernelStack:        8128 kB
PageTables:        53692 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16712096 kB
Committed_AS:    6735376 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      578084 kB
VmallocChunk:   34359117432 kB
HardwareCorrupted:     0 kB
AnonHugePages:    276480 kB
HugePages_Total:       0
HugePages_Free:        0
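
For reference, the gap can be read straight off the dump above; the small
userspace sketch below (plain C that only parses /proc/meminfo, assuming the
field names shown above, nothing kernel-side) prints Mapped minus the file LRU
total, which here comes to 2724140 - (351320 + 163340) = 2209480 kB, i.e.
about 2.1 GB:

/* meminfo_gap.c - userspace sketch only: parse /proc/meminfo and print
 * the difference between Mapped and Active(file) + Inactive(file). */
#include <stdio.h>
#include <string.h>

static long meminfo_kb(const char *name)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long val = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (!strncmp(line, name, strlen(name))) {
            sscanf(line + strlen(name), " %ld", &val);
            break;
        }
    }
    fclose(f);
    return val;    /* /proc/meminfo reports kB */
}

int main(void)
{
    long mapped   = meminfo_kb("Mapped:");
    long act_file = meminfo_kb("Active(file):");
    long ina_file = meminfo_kb("Inactive(file):");

    printf("Mapped:          %ld kB\n", mapped);
    printf("file LRU total:  %ld kB\n", act_file + ina_file);
    printf("difference:      %ld kB\n", mapped - (act_file + ina_file));
    return 0;
}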

Due to my limited knowledge of memory management, I don't know why Mapped is
so much bigger than the sum of Active(file) and Inactive(file), so ftrace was
enabled to track the increments of Mapped (traced function:
page_add_file_rmap):

[root@localhost tracing]# pwd
/sys/kernel/debug/tracing
[root@localhost tracing]# echo page_add_file_rmap > set_ftrace_filter
[root@localhost tracing]# echo function > current_tracer
[root@localhost tracing]# echo 1 > options/func_stack_trace
# start VirtualBox, our testing process
[root@localhost tracing]# cat trace > /home/tracer_nr_mapped.log
[root@localhost tracing]# echo 0 > options/func_stack_trace
[root@localhost tracing]# echo > set_ftrace_filter
[root@localhost tracing]# echo 0 > tracing_on

The result shows that most of the increments occur in the following path:
      VirtualBox-3151  [000] ...1   523.775961: page_add_file_rmap <-do_set_pte
      VirtualBox-3151  [000] ...1   523.775963: <stack trace>
 => update_curr
 => page_add_file_rmap
 => put_prev_entity
 => page_add_file_rmap
 => do_set_pte
 => filemap_map_pages
 => do_read_fault.isra.61
 => handle_mm_fault
 => get_futex_key
 => hrtimer_wakeup
 => __do_page_fault
 => do_futex
 => do_page_fault
 => page_fault

So it is filemap_map_pages.

Firstly, filemap_map_pages only considers pages that are already in the
page cache tree;
secondly, every page in the page cache tree has previously been added to the
inactive list once its on-demand fault finished
(filemap_fault -> find_get_page);
thirdly, page cache pages are only moved between the inactive LRU and the
active LRU (plus the mem cgroup LRUs).
So the total of Active(file) and Inactive(file) should be bigger than Mapped;
why is it the other way around in our environment?
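
To make that expectation concrete, the sketch below is a small userspace
experiment (not a diagnosis of the VirtualBox case): it mmaps an ordinary file
read-only, faults every page in, and prints Mapped together with the file LRU
total before and after; /tmp/scratch.dat is just a hypothetical test file
created beforehand, e.g. with dd. In the normal
filemap_fault/filemap_map_pages path both numbers should grow together by
roughly the file size, which is exactly the relationship that seems to be
violated on our machine:

/* mapped_vs_lru.c - userspace experiment sketch: faulting in a file-backed
 * mapping should raise Mapped and the file LRU counters together. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static long meminfo_kb(const char *name)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long val = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (!strncmp(line, name, strlen(name))) {
            sscanf(line + strlen(name), " %ld", &val);
            break;
        }
    }
    fclose(f);
    return val;
}

static void snapshot(const char *tag)
{
    long mapped = meminfo_kb("Mapped:");
    long lru = meminfo_kb("Active(file):") + meminfo_kb("Inactive(file):");

    printf("%-7s Mapped=%ld kB  file LRU=%ld kB\n", tag, mapped, lru);
}

int main(void)
{
    int fd = open("/tmp/scratch.dat", O_RDONLY);  /* hypothetical test file */
    struct stat st;
    volatile char sum = 0;
    char *p;
    off_t i;

    if (fd < 0 || fstat(fd, &st) < 0)
        return 1;

    snapshot("before");
    p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;
    for (i = 0; i < st.st_size; i += 4096)  /* touch every page */
        sum += p[i];
    snapshot("after");

    munmap(p, st.st_size);
    close(fd);
    return 0;
}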

I'm not sure whether I understand the code correctly. Could you please give
me some advice or suggestions on why this happened?
Thanks in advance.



Yu




