[PATCH v5 0/3] protect page cache from freeing inode

On my server there are some running MEMCGs protected by memory.{min, low},
but I found that the usage of these MEMCGs abruptly became very small, far
below the protection limit. It confused me, and I finally found that it was
caused by inode stealing.
Once an inode is freed, all of the page cache it holds is dropped as well,
no matter how many pages that is. So if we intend to protect the page cache
in a memcg, we must protect its host (the inode) first. Otherwise the memcg
protection can easily be bypassed by freeing the inode, especially if there
are big files in this memcg.
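
To make the idea concrete, here is a minimal, hypothetical sketch of an
inode LRU isolation callback that skips such inodes. It is not the code in
this series; the memcg and memcg_low_reclaim parameters only stand for the
information that patches 1 and 2 make visible to the isolation function,
and memcg_is_protected() is a made-up helper (a possible shape of it is
sketched further below):

/*
 * Hypothetical sketch, not the actual patch: when walking a memcg's
 * inode LRU, rotate an inode that still pins page cache if that memcg
 * is under memory.{min,low} protection, so freeing the inode cannot
 * drop the protected page cache as a side effect.
 */
static enum lru_status inode_lru_isolate(struct list_head *item,
                                         struct list_lru_one *lru,
                                         spinlock_t *lru_lock,
                                         struct mem_cgroup *memcg,  /* owner of this LRU (sketch) */
                                         bool memcg_low_reclaim,    /* from scan_control (sketch) */
                                         void *arg)
{
        struct inode *inode = container_of(item, struct inode, i_lru);

        /* The inode still holds page cache charged to a protected memcg. */
        if (inode->i_data.nrpages &&
            memcg_is_protected(memcg, memcg_low_reclaim))
                return LRU_ROTATE;      /* keep the inode, reclaim elsewhere */

        /* ... the normal isolation and freeing logic is elided ... */
        return LRU_SKIP;
}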

The inherent mismatch between memcg and inode is a problem. One inode can
be shared by different MEMCGs, but that is a rare case. If an inode is
shared, its page cache may be charged to different MEMCGs. There is
currently no perfect solution for this kind of issue, but the inode
majority-writer ownership switching can help more or less.

After this patch, it may take extra time to skip these inodes when a
workload outside of a memcg protected by memory.min or memory.low is
trying to do page reclaim, especially if there are lots of inodes pinned
by page cache in the protected memcg. In order to measure the potential
regression, I constructed the test case below on my server.
My server is a two-node machine with 64GB of memory per node. I created
two memcgs and set memory.low to 1G for both. Then I generated more than
500 thousand inodes in each of them, with per-inode page cache ranging
from 4K to 4M. IOW, there are more than 1 million xfs_inodes in memory in
total, holding nearly 128GB of page cache. Then I ran a workload outside
of these two protected memcgs: usemem from Mel's mmtests, slightly
modified to allocate almost all of memory and iterate only once.
Below is a comparison of the Amean of elapsed time and sys%.

                               5.6.0-rc4               patched
Amean     syst-4        65.75 (   0.00%)       68.08 *  -3.54%*
Amean     elsp-4        32.14 (   0.00%)       32.63 *  -1.52%*
Amean     syst-7        67.47 (   0.00%)       66.71 *   1.13%*
Amean     elsp-7        19.83 (   0.00%)       18.41 *   7.16%*
Amean     syst-12       98.27 (   0.00%)       99.29 *  -1.04%*
Amean     elsp-12       15.60 (   0.00%)       16.00 *  -2.56%*
Amean     syst-21      174.69 (   0.00%)      172.92 *   1.01%*
Amean     elsp-21       14.63 (   0.00%)       14.75 *  -0.82%*
Amean     syst-30      195.78 (   0.00%)      205.90 *  -5.17%*
Amean     elsp-30       12.42 (   0.00%)       12.73 *  -2.50%*
Amean     syst-40      249.85 (   0.00%)      250.81 *  -0.38%*
Amean     elsp-40       12.19 (   0.00%)       12.25 *  -0.49%*

I ran this test many times; each run gave a slightly different result,
but the difference is not big.

Furthermore, this behavior only occurs when memory.min or memory.low is
set, and the user already knows that memory.{min, low} protects pages at
the cost of extra CPU time, so a small extra cost is expected.

If the workload trying to reclaim these protected inodes is itself inside
a protected memcg, it will not be affected at all, because memory.{min,
low} does not take effect under that condition.
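
For reference, below is a hedged sketch of what the protection decision
could look like, assuming the 5.6-era mem_cgroup_protected() interface.
memcg_is_protected() is the made-up helper used in the sketch above, not
an existing kernel API; the real series passes memcg_low_reclaim down from
scan_control rather than deciding it here:

/*
 * Hypothetical sketch of the protection check.  The memcg's page cache
 * is treated as protected when it sits under memory.min, or under
 * memory.low while this is not already a memory.low-breaching reclaim
 * pass (memcg_low_reclaim == false).
 */
static bool memcg_is_protected(struct mem_cgroup *memcg, bool memcg_low_reclaim)
{
        /*
         * A NULL root means "relative to the root cgroup"; the real
         * series would use the reclaim target from scan_control.
         */
        switch (mem_cgroup_protected(NULL, memcg)) {
        case MEMCG_PROT_MIN:
                return true;                    /* hard protection: always skip */
        case MEMCG_PROT_LOW:
                return !memcg_low_reclaim;      /* skip unless low is being broken */
        default:
                return false;                   /* not protected */
        }
}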

- Changes against v4:
Updated with test results measuring the potential regression, and rebased
this patchset on 5.6.0-rc4.

- Changes against v3:
Fixed the possible risk pointed out by Johannes in another patchset [1].
Per discussion with Johannes in that mail thread, I found that the issue
Johannes is trying to fix is different from the issue I'm trying to fix
here. That's why I updated this patchset and posted it again. This
specific memcg protection issue should be addressed.

- Changes against v2:
    1. Separated memcg patches from this patchset, suggested by Roman.
    2. Improved code around the usage of for_each_mem_cgroup(), suggested
       by Dave.
    3. Used memcg_low_reclaim passed from scan_control, instead of
       introducing a new member in struct mem_cgroup.
    4. Some other code improvements suggested by Dave.


- Changes against v1:
Used the memcg passed from the shrink_control, instead of getting it from
the inode itself, as suggested by Dave. That improves the layering.

[1]. https://lore.kernel.org/linux-mm/20200211175507.178100-1-hannes@xxxxxxxxxxx/

Yafang Shao (3):
  mm, list_lru: make memcg visible to lru walker isolation function
  mm, shrinker: make memcg low reclaim visible to lru walker isolation
    function
  inode: protect page cache from freeing inode

 fs/inode.c                 | 76 ++++++++++++++++++++++++++++++++++++--
 include/linux/memcontrol.h | 21 +++++++++++
 include/linux/shrinker.h   |  3 ++
 mm/list_lru.c              | 47 +++++++++++++----------
 mm/memcontrol.c            | 15 --------
 mm/vmscan.c                | 27 ++++++++------
 6 files changed, 141 insertions(+), 48 deletions(-)

-- 
2.18.1
