On Tue, 2020-02-11 at 12:55 -0500, Johannes Weiner wrote:
> The VFS inode shrinker is currently allowed to reclaim inodes with
> populated page cache. As a result it can drop gigabytes of hot and
> active page cache on the floor without consulting the VM (recorded
> as "inodesteal" events in /proc/vmstat).
>
> This causes real problems in practice. Consider for example how the
> VM would cache a source tree, such as the Linux git tree. As large
> parts of the checked out files and the object database are accessed
> repeatedly, the page cache holding this data gets moved to the
> active list, where it's fully (and indefinitely) insulated from
> one-off cache moving through the inactive list.
>
> This behavior of invalidating page cache from the inode shrinker
> goes back to even before the git import of the kernel tree. It may
> have been less noticeable when the VM itself didn't have real
> workingset protection, and floods of one-off cache would push out
> any active cache over time anyway. But the VM has come a long way
> since then and the inode shrinker is now actively subverting its
> caching strategy.

Two things come to mind when looking at this:
- highmem
- NUMA

IIRC, one of the reasons reclaim is done this way is that a page
cache page in one area of memory (highmem, or a NUMA node) can end up
pinning inode slab memory in another memory area (the normal zone, or
another NUMA node).

I do not know how much of a concern that still is nowadays, but it
seemed worth bringing up.
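For reference, the behavior under discussion lives in
inode_lru_isolate() in fs/inode.c. A heavily simplified sketch of the
pre-patch logic (locking, refcounting and the list_lru plumbing are
omitted, so this is not the verbatim kernel code):

	static enum lru_status inode_lru_isolate(struct list_head *item,
			struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
	{
		struct inode *inode = container_of(item, struct inode, i_lru);

		/* In-use and dirty inodes are skipped (details omitted). */

		/* Recently referenced inodes get one more trip around the LRU. */
		if (inode->i_state & I_REFERENCED) {
			inode->i_state &= ~I_REFERENCED;
			return LRU_ROTATE;
		}

		/*
		 * The contentious part: to free an unreferenced inode that
		 * still has page cache, the shrinker invalidates the entire
		 * mapping -- active pages included -- without consulting the
		 * VM. The reaped pages show up as kswapd_inodesteal or
		 * pginodesteal in /proc/vmstat.
		 */
		if (inode_has_buffers(inode) || inode->i_data.nrpages) {
			unsigned long reap;

			reap = invalidate_mapping_pages(&inode->i_data, 0, -1);
			if (current_is_kswapd())
				__count_vm_events(KSWAPD_INODESTEAL, reap);
			else
				__count_vm_events(PGINODESTEAL, reap);
			return LRU_RETRY;
		}

		/* Nothing pins the inode anymore; it can be freed. */
		return LRU_REMOVED;
	}

That nrpages branch is also where the highmem/NUMA concern above
comes in: invalidating the cache is what guarantees the inode can
eventually be freed, so without it, cache pages in one memory area
could pin slab objects in another indefinitely.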
-- 
All Rights Reversed.