On Fri, Oct 21, 2016 at 01:35:14PM -0700, Shaohua Li wrote:
> In our systems, proc/sysfs inode/dentry cache use more than 1G memory
> even memory pressure is high sometimes. Since proc/sysfs is in-memory
> filesystem, rebuilding the cache is fast. There is no point proc/sysfs
> and disk fs have equal pressure for slab shrink.
>
> One idea is directly discarding proc/sysfs inode/dentry cache rightly
> after the proc/sysfs file is closed. But the discarding will make
> proc/sysfs file open slower next time, which is 20x slower in my test if
> multiple applications are accessing proc files. This patch doesn't go
> that far. Instead, just put more pressure to shrink proc/sysfs slabs.
>
> Signed-off-by: Shaohua Li <shli@xxxxxx>
> ---
>  fs/kernfs/mount.c | 2 ++
>  fs/proc/inode.c   | 2 ++
>  2 files changed, 4 insertions(+)
>
> diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
> index d5b149a..5b4e747 100644
> --- a/fs/kernfs/mount.c
> +++ b/fs/kernfs/mount.c
> @@ -161,6 +161,8 @@ static int kernfs_fill_super(struct super_block *sb, unsigned long magic)
> 	sb->s_xattr = kernfs_xattr_handlers;
> 	sb->s_time_gran = 1;
>
> +	sb->s_shrink.seeks = 1;
> +	sb->s_shrink.batch = 0;

This sort of thing needs comments as to why they are being changed.
Otherwise the next person who comes along to do shrinker modifications
won't have a clue about why this magic exists.

Also, I don't think s_shrink.batch = 0 does what you think it does.
The superblock batch size default of 1024 is more efficient than
setting sb->s_shrink.batch = 0, as that makes the shrinker fall back to
SHRINK_BATCH:

#define SHRINK_BATCH 128

i.e. it does less work per batch and so has more overhead....

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
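
To make the batch-size point concrete, here is a small stand-alone sketch
(ordinary userspace C, not kernel code; it only mimics the ->batch == 0
fallback described above, using the SHRINK_BATCH value of 128 and the
superblock default of 1024 quoted in this mail; the helper names are made
up for illustration):

#include <stdio.h>

#define SHRINK_BATCH		128	/* fallback used when a shrinker's ->batch is 0 */
#define SB_DEFAULT_BATCH	1024	/* superblock shrinker default quoted above */

/* Count how many scan passes are needed to walk total_scan objects. */
static unsigned long scan_passes(long batch, long total_scan)
{
	long batch_size = batch ? batch : SHRINK_BATCH;	/* the fallback */
	unsigned long passes = 0;

	while (total_scan > 0) {
		long nr_to_scan = total_scan < batch_size ? total_scan : batch_size;

		total_scan -= nr_to_scan;
		passes++;	/* in the kernel this would be one ->scan_objects() call */
	}
	return passes;
}

int main(void)
{
	long total_scan = 1L << 20;	/* pretend 1M objects are eligible for reclaim */

	printf("batch = 0 (falls back to %d): %lu passes\n",
	       SHRINK_BATCH, scan_passes(0, total_scan));
	printf("batch = %d (sb default):      %lu passes\n",
	       SB_DEFAULT_BATCH, scan_passes(SB_DEFAULT_BATCH, total_scan));
	return 0;
}

With 1M eligible objects the zero setting works out to 8192 passes versus
1024 with the default, which is the extra per-batch overhead being pointed
out above.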