Fri, 26 Oct 2018 at 18:57, Roman Gushchin <guro@xxxxxx>:
>
> On Fri, Oct 26, 2018 at 10:57:35AM +0200, Michal Hocko wrote:
> > Spock doesn't seem to be cced here - fixed now
> >
> > On Tue 23-10-18 16:43:29, Roman Gushchin wrote:
> > > Spock reported that commit 172b06c32b94 ("mm: slowly shrink slabs
> > > with a relatively small number of objects") leads to a regression
> > > on his setup: periodically the majority of the pagecache is evicted
> > > without an obvious reason, while before the change the amount of
> > > free memory was balancing around the watermark.
> > >
> > > The reason is that the change mentioned above created some minimal
> > > background pressure on the inode cache. The problem is that once an
> > > inode is selected for reclaim, all of its attached pagecache pages
> > > are stripped, no matter how many of them there are. So, if a huge
> > > multi-gigabyte file is cached in memory, and the goal is to reclaim
> > > only a few slab objects (unused inodes), we can still end up
> > > evicting gigabytes of pagecache at once.
> > >
> > > The workload described by Spock has a few large non-mapped files in
> > > the pagecache, so it's especially noticeable there.
> > >
> > > To solve the problem, let's postpone the reclaim of inodes which
> > > have more than one attached page. Let's wait until the pagecache
> > > pages have been evicted naturally by scanning the corresponding LRU
> > > lists, and only then reclaim the inode structure.
> >
> > Has this actually fixed/worked around the issue?
>
> Spock wrote this earlier to me directly. I believe I can quote it here:
>
> "Patch applied, looks good so far. The system behaves like it did with
> pre-4.18.15 kernels.
> I also added some user-level tests on top of the generic background
> activity, like:
> - stat'ing a bunch of files
> - streaming reads of several large files at once on ext4 and XFS
> - random reads over the whole collection with a read size of 16K
>
> I will keep monitoring as fragmentation builds up and will report back
> if something bad happens."
>
> Spock, please let me know if you have any new results.
>
> Thanks!

Hello,

I'd say the patch fixed the problem, at least with my workload:

MemTotal:        8164968 kB
MemFree:          135852 kB
MemAvailable:    6406088 kB
Buffers:           11988 kB
Cached:          6414124 kB
SwapCached:            0 kB
Active:          1491952 kB
Inactive:        5989576 kB
Active(anon):     542512 kB
Inactive(anon):   523780 kB
Active(file):     949440 kB
Inactive(file):  5465796 kB
Unevictable:        8872 kB
Mlocked:            8872 kB
SwapTotal:       4194300 kB
SwapFree:        4194300 kB
Dirty:               128 kB
Writeback:             0 kB
AnonPages:       1064232 kB
Mapped:            32348 kB
Shmem:              3952 kB
Slab:             205108 kB
SReclaimable:     148792 kB
SUnreclaim:        56316 kB
KernelStack:        3984 kB
PageTables:        11100 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     8276784 kB
Committed_AS:    1944792 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
AnonHugePages:      6144 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      271872 kB
DirectMap2M:     8116224 kB
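
For reference, the fix described in the quoted commit message boils down
to rotating inodes that still have attached pagecache on the inode LRU
instead of evicting them. A minimal sketch of that check, written against
the inode LRU isolation callback in fs/inode.c; this is my reconstruction
of the idea as described above, not a quote of the actual patch:

static enum lru_status inode_lru_isolate(struct list_head *item,
		struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
{
	struct inode *inode = container_of(item, struct inode, i_lru);
	...
	/*
	 * Recently referenced inodes and inodes with many attached pages
	 * get one more pass: rotating them on the LRU lets the pagecache
	 * shrink page by page via the file LRU scan instead of being
	 * dropped all at once together with the inode.
	 */
	if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) {
		inode->i_state &= ~I_REFERENCED;
		spin_unlock(&inode->i_lock);
		return LRU_ROTATE;
	}
	...
}

With a check like this, an inode backing a large cached file keeps
getting rotated until its pagecache has shrunk to at most one page,
which matches the "postpone the reclaim of inodes which have more than
one attached page" behavior described in the commit message.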