Re: [PATCH 0/4] memcg, inode: protect page cache from freeing inode

On Tue, Dec 17, 2019 at 11:54:22AM -0500, Johannes Weiner wrote:
> CCing Dave
> 
> On Tue, Dec 17, 2019 at 08:19:08PM +0800, Yafang Shao wrote:
> > On Tue, Dec 17, 2019 at 7:56 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > > What do you mean by this exactly? Are those inodes reclaimed by the
> > > regular memory reclaim or by other means? Because shrink_node does
> > > exclude shrinking slab for protected memcgs.
> > 
> > By the regular memory reclaim: kswapd, the direct reclaimer, or the
> > memcg reclaimer. IOW, current->reclaim_state is set.
> > 
> > Here is an example:
> > 
> > kswapd
> >     balance_pgdat
> >         shrink_node_memcgs
> >             switch (mem_cgroup_protected)  <<<< memory.current = 1024M,
> >                                                 memory.min = 512M, and a
> >                                                 file has 800M of page cache
> >                 case MEMCG_PROT_NONE:  <<<< hard limit is not reached.
> >                       break;
> >             shrink_lruvec
> >             shrink_slab  <<<< it may free the inode and then free all of
> >                               its page cache (800M)
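
For context, a simplified sketch of that call path (roughly the
per-memcg reclaim loop in mm/vmscan.c around the kernels being
discussed; names and details vary by version). The point of the
example is that MEMCG_PROT_NONE falls through to both the LRU scan
and the slab shrinkers, so the inode shrinker still runs against the
protected cgroup:

	/* sketch only, not verbatim kernel code */
	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
	do {
		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

		switch (mem_cgroup_protected(target_memcg, memcg)) {
		case MEMCG_PROT_MIN:
			/* memory.min protection: skip this memcg */
			continue;
		case MEMCG_PROT_LOW:
			/* memory.low: skip unless nothing else is left */
			if (!sc->memcg_low_reclaim) {
				sc->memcg_low_skipped = 1;
				continue;
			}
			break;
		case MEMCG_PROT_NONE:
			/* usage above the protected amount: reclaim */
			break;
		}

		shrink_lruvec(lruvec, sc);	/* page LRUs */
		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
			    sc->priority);	/* inode/dentry shrinkers */
	} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));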

<looks at patch>

Oh, great, yet another special heuristic reclaim hack for some
whacky memcg reclaim corner case.

> This problem exists independent of cgroup protection.
> 
> The inode shrinker may take down an inode that's still holding a ton
> of (potentially active) page cache pages when the inode hasn't been
> referenced recently.

Ok, please explain to me how those pages are getting repeatedly
referenced and kept active without referencing the inode in some
way?

e.g. active mmap pins a struct file which pins the inode.
e.g. open fd pins a struct file which pins the inode.
e.g. open/read/write/close keeps a dentry active in cache which pins
the inode when not actively referenced by the open fd.
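
In code terms, the pinning chain those examples describe looks
roughly like this (mainline field names; a sketch only, with the
refcounting and locking details omitted, and inode_pinned_by() just
an illustrative helper, not an existing function):

	/*
	 *   open fd     -> struct file (get_file()/fput() refcounted)
	 *   active mmap -> vma->vm_file -> struct file
	 *   struct file -> file->f_path.dentry (pinned dentry)
	 *   dentry      -> d_inode(dentry), held while the dentry is cached
	 */
	static struct inode *inode_pinned_by(struct file *filp)
	{
		/* an open file pins the dentry, which pins the inode */
		return d_inode(filp->f_path.dentry);	/* == filp->f_inode */
	}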

AFAIA, all of the cases where -file pages- are being actively
referenced also require actively referencing the inode in some way.
So why is the inode being reclaimed as an unreferenced inode at the
end of the LRU if these are actively referenced file pages?

> IMO we shouldn't be dropping data that the VM still considers hot
> compared to other data, just because the inode object hasn't been used
> as recently as other inode objects (e.g. drowned in a stream of
> one-off inode accesses).

It should not be drowned by one-off inode accesses, because if the
file data is being actively referenced then there should be frequent
active references to the inode that contains the data, and those
should be keeping it away from the tail of the inode LRU.

If the inode is not being frequently referenced, then it
isn't really part of the current working set of inodes, is it?
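
That aging comes from the normal iput() path: when the last user
reference is dropped, the inode is flagged as referenced before it is
parked on the inode LRU, so an inode that keeps being reopened keeps
getting rotated by the shrinker instead of reclaimed. Roughly, from
iput_final() in fs/inode.c (simplified, as of the kernels being
discussed here):

	if (!drop && (sb->s_flags & SB_ACTIVE)) {
		inode->i_state |= I_REFERENCED;	/* shrinker rotates it once */
		inode_add_lru(inode);		/* park it on the inode LRU */
		spin_unlock(&inode->i_lock);
		return;
	}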

> I've carried the below patch in my private tree for testing cache
> aging decisions that the shrinker interfered with. (It would be nicer
> if page cache pages could pin the inode of course, but reclaim cannot
> easily participate in the inode refcounting scheme.)
> 
> Thoughts?
> 
> diff --git a/fs/inode.c b/fs/inode.c
> index fef457a42882..bfcaaaf6314f 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -753,7 +753,13 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
>  		return LRU_ROTATE;
>  	}
>  
> -	if (inode_has_buffers(inode) || inode->i_data.nrpages) {
> +	/* Leave the pages to page reclaim */
> +	if (inode->i_data.nrpages) {
> +		spin_unlock(&inode->i_lock);
> +		return LRU_ROTATE;
> +	}
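
For reference, the mainline branch this hunk replaces looks roughly
like the following in inode_lru_isolate(): an unreferenced inode that
still has buffers or page cache gets its mapping invalidated so the
inode can be freed on a later pass, rather than being rotated.
Simplified sketch, not verbatim:

	if (inode_has_buffers(inode) || inode->i_data.nrpages) {
		__iget(inode);
		spin_unlock(&inode->i_lock);
		spin_unlock(lru_lock);
		if (remove_inode_buffers(inode))
			/* drop the clean page cache attached to the inode */
			invalidate_mapping_pages(&inode->i_data, 0, -1);
		iput(inode);
		spin_lock(lru_lock);
		return LRU_RETRY;
	}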

<sigh>

Remember this?

commit 69056ee6a8a3d576ed31e38b3b14c70d6c74edcc
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Tue Feb 12 15:35:51 2019 -0800

    Revert "mm: don't reclaim inodes with many attached pages"
    
    This reverts commit a76cf1a474d7d ("mm: don't reclaim inodes with many
    attached pages").
    
    This change causes serious changes to page cache and inode cache
    behaviour and balance, resulting in major performance regressions when
    combining workloads such as large file copies and kernel compiles.
    
      https://bugzilla.kernel.org/show_bug.cgi?id=202441
    
    This change is a hack to work around the problems introduced by changing
    how aggressive shrinkers are on small caches in commit 172b06c32b94 ("mm:
    slowly shrink slabs with a relatively small number of objects").  It
    creates more problems than it solves, wasn't adequately reviewed or
    tested, so it needs to be reverted.
    
    Link: http://lkml.kernel.org/r/20190130041707.27750-2-david@xxxxxxxxxxxxx
    Fixes: a76cf1a474d7d ("mm: don't reclaim inodes with many attached pages")
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Cc: Wolfgang Walter <linux@xxxxxxx>
    Cc: Roman Gushchin <guro@xxxxxx>
    Cc: Spock <dairinin@xxxxxxxxx>
    Cc: Rik van Riel <riel@xxxxxxxxxxx>
    Cc: Michal Hocko <mhocko@xxxxxxxxxx>
    Cc: <stable@xxxxxxxxxxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>

diff --git a/fs/inode.c b/fs/inode.c
index 0cd47fe0dbe5..73432e64f874 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -730,11 +730,8 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
                return LRU_REMOVED;
        }
 
-       /*
-        * Recently referenced inodes and inodes with many attached pages
-        * get one more pass.
-        */
-       if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) {
+       /* recently referenced inodes get one more pass */
+       if (inode->i_state & I_REFERENCED) {
                inode->i_state &= ~I_REFERENCED;
                spin_unlock(&inode->i_lock);
                return LRU_ROTATE;


-- 
Dave Chinner
david@xxxxxxxxxxxxx



