On Thu, Mar 01, 2018 at 02:55:49PM -0800, Andrew Morton wrote:
> On Thu, 1 Mar 2018 22:17:13 +0000 Roman Gushchin <guro@xxxxxx> wrote:
>
> > I've received reports of suspicious growth of unreclaimable slabs
> > on some machines. I've found that it happens on machines
> > with low memory pressure, and these unreclaimable slabs
> > are external names attached to dentries.
> >
> > External names are allocated using the generic kmalloc() function,
> > so they are accounted as unreclaimable. But they are held
> > by dentries, which are reclaimable, and they will be reclaimed
> > under memory pressure.
> >
> > In particular, this breaks the MemAvailable calculation, as it
> > doesn't take unreclaimable slabs into account.
> > This leads to a silly situation, where a machine is almost idle,
> > has no memory pressure and therefore has a big dentry cache,
> > and the resulting MemAvailable is too low to start a new workload.
> >
> > To resolve this issue, a new mm counter is introduced:
> > NR_INDIRECTLY_RECLAIMABLE_BYTES.
> > Since it's not possible to count such objects on a per-page basis,
> > let's make the unit obvious (by analogy to NR_KERNEL_STACK_KB).
> >
> > The counter is increased in the dentry allocation path, if an
> > external name structure is allocated; and it's decreased in the
> > dentry freeing path. I believe this is not the only case in the
> > kernel where we have such indirectly reclaimable memory, so I
> > expect more use cases to be added.
> >
> > This counter is used to adjust the MemAvailable calculation:
> > indirectly reclaimable memory is considered available.
> >
> > To reproduce the problem I've used the following Python script:
> >
> > import os
> >
> > for iter in range(0, 10000000):
> >     try:
> >         name = ("/some_long_name_%d" % iter) + "_" * 220
> >         os.stat(name)
> >     except Exception:
> >         pass
> >
> > Without this patch:
> > $ cat /proc/meminfo | grep MemAvailable
> > MemAvailable:    7811688 kB
> > $ python indirect.py
> > $ cat /proc/meminfo | grep MemAvailable
> > MemAvailable:    2753052 kB
> >
> > With the patch:
> > $ cat /proc/meminfo | grep MemAvailable
> > MemAvailable:    7809516 kB
> > $ python indirect.py
> > $ cat /proc/meminfo | grep MemAvailable
> > MemAvailable:    7749144 kB
> >
> > Also, this patch adds a corresponding entry to /proc/vmstat:
> >
> > $ cat /proc/vmstat | grep indirect
> > nr_indirectly_reclaimable 5117499104
> >
> > $ echo 2 > /proc/sys/vm/drop_caches
> >
> > $ cat /proc/vmstat | grep indirect
> > nr_indirectly_reclaimable 7104

> hm, I guess so...
>
> I wonder if it should be more general, as there are probably other
> potential users of NR_INDIRECTLY_RECLAIMABLE_BYTES. And they might be
> using alloc_pages() or even vmalloc()? Whereas
> NR_INDIRECTLY_RECLAIMABLE_BYTES is pretty closely tied to kmalloc, at
> least in the code comments.

I don't see anything kmalloc-specific in the counter itself, except
that it's in bytes (which is required). It can perfectly well be used
for any type of allocation, and I'm pretty sure there are other use
cases.

This is an RFC patch, so I merged everything into one patch to make it
easier to understand the problem and the proposed solution. Once we
agree on the approach, I'll probably split it into a few parts:
1) introduction of the counter (and the concept of indirectly
   reclaimable memory)
2) MemAvailable adjustment
3) using the counter from the dcache allocation/freeing paths

> If we're really OK with the "only for kmalloc" concept then why create
> NR_INDIRECTLY_RECLAIMABLE_BYTES at all? Could we just use
> NR_SLAB_RECLAIMABLE to account the external names?
> After all, kmalloc is slab.

I've thought about this approach, but it's really hard to track
reclaimable and unreclaimable objects in one slab cache, so the only
option I see is to duplicate all kmalloc caches. IMO, that's a bit too
heavy, but I'm not completely sure. Also, it's less powerful, as
non-kmalloc allocations can't be tracked.

Thank you!

Roman
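[Editor's note: for readers following the thread, the MemAvailable adjustment discussed above can be modeled outside the kernel in a few lines of plain Python. This is an illustrative sketch only, not kernel code: the function name and inputs are hypothetical, and the real computation lives in si_mem_available() in mm/page_alloc.c. The point it shows is why the counter must carry an explicit byte unit: individual external names are far smaller than a page, so the counter is accumulated in bytes and only converted to pages when folded into the estimate.]

```python
# Illustrative model of folding a bytes-based counter into a
# MemAvailable-style estimate. All names are hypothetical; this is
# NOT the kernel implementation.

PAGE_SIZE = 4096  # assumed page size in bytes


def mem_available_pages(free_pages, reclaimable_slab_pages,
                        indirectly_reclaimable_bytes):
    """Estimate available memory in pages.

    The bytes counter is truncated to whole pages before being added,
    mirroring how a byte-granular counter (cf. NR_KERNEL_STACK_KB's
    kB unit) coexists with page-granular vmstat counters.
    """
    return (free_pages
            + reclaimable_slab_pages
            + indirectly_reclaimable_bytes // PAGE_SIZE)


# Using the nr_indirectly_reclaimable values from the commit message:
# before and after `echo 2 > /proc/sys/vm/drop_caches` (free/slab page
# counts here are made up for illustration).
idle = mem_available_pages(1_900_000, 10_000, 5_117_499_104)
dropped = mem_available_pages(1_900_000, 10_000, 7_104)
print(idle, dropped)
```

With the counter included, the ~5 GB of external names held by the idle machine's dentry cache is reported as available instead of silently depressing the estimate.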