The patch titled
     Subject: mm/vmscan.c: clear shrinker bit if there are no objects related to memcg
has been added to the -mm tree.  Its filename is
     mm-clear-shrinker-bit-if-there-are-no-objects-related-to-memcg.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-clear-shrinker-bit-if-there-are-no-objects-related-to-memcg.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-clear-shrinker-bit-if-there-are-no-objects-related-to-memcg.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Subject: mm/vmscan.c: clear shrinker bit if there are no objects related to memcg

To avoid further unneeded calls of do_shrink_slab() for shrinkers which no
longer have any charged objects in a memcg, their bits have to be cleared.

This patch introduces a lockless mechanism to do that without racing with
parallel list_lru additions.  After do_shrink_slab() returns SHRINK_EMPTY
the first time, we clear the bit and call it once again.  Then we restore
the bit if the new return value is different.  (A simplified userspace
sketch of this protocol follows the performance numbers below.)

Note that the single smp_mb__after_atomic() in shrink_slab_memcg() covers
two situations:

1) list_lru_add()          shrink_slab_memcg()
     list_add_tail()         for_each_set_bit() <--- read bit
                               do_shrink_slab() <--- missed list update (no barrier)
     <MB>                      <MB>
     set_bit()                 do_shrink_slab() <--- seen list update

This situation, when the first do_shrink_slab() sees the set bit but does
not see the list update (i.e., it races with the queueing of the first
element), is rare.  So we do not add a <MB> before the first call of
do_shrink_slab(), to avoid slowing down the generic case; instead we rely
on the second call, as shown in (2) below.

2) list_lru_add()          shrink_slab_memcg()
     list_add_tail()         ...
     set_bit()               ...
   ...                       for_each_set_bit()
   do_shrink_slab()          do_shrink_slab()
     clear_bit()             ...
   ...                       ...
   list_lru_add()            ...
     list_add_tail()         clear_bit()
     <MB>                    <MB>
     set_bit()               do_shrink_slab()

The barriers guarantee that the second do_shrink_slab() in the right-hand
task sees the list update if it really cleared the bit.  This case is
drawn in the code comment.

[Results/performance of the patchset]

With the whole patchset applied, the test below shows a significant
performance increase:

$echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
$mkdir /sys/fs/cgroup/memory/ct
$echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
$for i in `seq 0 4000`; do mkdir /sys/fs/cgroup/memory/ct/$i; echo $$ > /sys/fs/cgroup/memory/ct/$i/cgroup.procs; mkdir -p s/$i; mount -t tmpfs $i s/$i; touch s/$i/file; done

Then, 5 sequential calls of drop_caches:

$time echo 3 > /proc/sys/vm/drop_caches

1) Before:
0.00user 13.78system 0:13.78elapsed 99%CPU
0.00user 5.59system 0:05.60elapsed 99%CPU
0.00user 5.48system 0:05.48elapsed 99%CPU
0.00user 8.35system 0:08.35elapsed 99%CPU
0.00user 8.34system 0:08.35elapsed 99%CPU

2) After:
0.00user 1.10system 0:01.10elapsed 99%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU

The results show a performance increase of at least 548 times.
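As an illustration of the clear-and-recheck protocol described above, here
is a minimal userspace sketch.  It is not the kernel code from this patch:
C11 atomics stand in for set_bit()/clear_bit(), smp_mb__before_atomic()
and smp_mb__after_atomic(), a plain counter stands in for the list_lru,
and the lru_add(), scan_one() and shrink_is_empty() names are illustrative
only.

/*
 * Simplified model of the lockless "clear bit, recheck, restore bit"
 * protocol.  One side queues an object and then advertises it in the
 * bitmap; the other side optimistically clears the bit and calls the
 * shrinker once more to catch an object that raced in.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_ulong shrinker_map;	/* one bit per shrinker            */
static atomic_long  nr_objects;		/* stands in for the list_lru size */

/* list_lru_add() side: queue an object, then set the shrinker bit. */
static void lru_add(int shrinker_id)
{
	atomic_fetch_add_explicit(&nr_objects, 1, memory_order_relaxed);
	/* Pairs with the fence in scan_one(), like smp_mb__before_atomic(). */
	atomic_thread_fence(memory_order_seq_cst);
	atomic_fetch_or_explicit(&shrinker_map, 1UL << shrinker_id,
				 memory_order_relaxed);
}

/* Stand-in for do_shrink_slab() returning SHRINK_EMPTY. */
static bool shrink_is_empty(void)
{
	return atomic_load_explicit(&nr_objects, memory_order_relaxed) == 0;
}

/* shrink_slab_memcg() side: clear the bit, then call the shrinker again. */
static void scan_one(int shrinker_id)
{
	if (!shrink_is_empty())
		return;

	/* First call said "empty": optimistically clear the bit ... */
	atomic_fetch_and_explicit(&shrinker_map, ~(1UL << shrinker_id),
				  memory_order_relaxed);
	/* ... order the clear before the recheck, like smp_mb__after_atomic() ... */
	atomic_thread_fence(memory_order_seq_cst);
	/* ... and recheck; restore the bit if an object was added meanwhile. */
	if (!shrink_is_empty())
		atomic_fetch_or_explicit(&shrinker_map, 1UL << shrinker_id,
					 memory_order_relaxed);
}

int main(void)
{
	lru_add(3);	/* producer queues an object for shrinker 3 */
	scan_one(3);	/* consumer scans shrinker 3                */
	printf("map=%#lx objects=%ld\n",
	       atomic_load(&shrinker_map), atomic_load(&nr_objects));
	return 0;
}

The paired barriers make the two orderings "add object, then set bit" and
"clear bit, then recheck objects" visible consistently to both sides, which
is exactly why the second do_shrink_slab() call in the patch is sufficient.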
Shakeel Butt tested this patchset with a fork-bomb on his configuration:

> I created 255 memcgs, 255 ext4 mounts and made each memcg create a
> file containing few KiBs on corresponding mount. Then in a separate
> memcg of 200 MiB limit ran a fork-bomb.
>
> I ran the "perf record -ag -- sleep 60" and below are the results:
>
> Without the patch series:
> Samples: 4M of event 'cycles', Event count (approx.): 3279403076005
> +  36.40%  fb.sh  [kernel.kallsyms]  [k] shrink_slab
> +  18.97%  fb.sh  [kernel.kallsyms]  [k] list_lru_count_one
> +   6.75%  fb.sh  [kernel.kallsyms]  [k] super_cache_count
> +   0.49%  fb.sh  [kernel.kallsyms]  [k] down_read_trylock
> +   0.44%  fb.sh  [kernel.kallsyms]  [k] mem_cgroup_iter
> +   0.27%  fb.sh  [kernel.kallsyms]  [k] up_read
> +   0.21%  fb.sh  [kernel.kallsyms]  [k] osq_lock
> +   0.13%  fb.sh  [kernel.kallsyms]  [k] shmem_unused_huge_count
> +   0.08%  fb.sh  [kernel.kallsyms]  [k] shrink_node_memcg
> +   0.08%  fb.sh  [kernel.kallsyms]  [k] shrink_node
>
> With the patch series:
> Samples: 4M of event 'cycles', Event count (approx.): 2756866824946
> +  47.49%  fb.sh  [kernel.kallsyms]  [k] down_read_trylock
> +  30.72%  fb.sh  [kernel.kallsyms]  [k] up_read
> +   9.51%  fb.sh  [kernel.kallsyms]  [k] mem_cgroup_iter
> +   1.69%  fb.sh  [kernel.kallsyms]  [k] shrink_node_memcg
> +   1.35%  fb.sh  [kernel.kallsyms]  [k] mem_cgroup_protected
> +   1.05%  fb.sh  [kernel.kallsyms]  [k] queued_spin_lock_slowpath
> +   0.85%  fb.sh  [kernel.kallsyms]  [k] _raw_spin_lock
> +   0.78%  fb.sh  [kernel.kallsyms]  [k] lruvec_lru_size
> +   0.57%  fb.sh  [kernel.kallsyms]  [k] shrink_node
> +   0.54%  fb.sh  [kernel.kallsyms]  [k] queue_work_on
> +   0.46%  fb.sh  [kernel.kallsyms]  [k] shrink_slab_memcg

Link: http://lkml.kernel.org/r/153063070859.1818.11870882950920963480.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Acked-by: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Tested-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
Cc: Guenter Roeck <linux@xxxxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Josef Bacik <jbacik@xxxxxx>
Cc: Li RongQing <lirongqing@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Matthias Kaehlcke <mka@xxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Philippe Ombredanne <pombredanne@xxxxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Sahitya Tummala <stummala@xxxxxxxxxxxxxx>
Cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
Cc: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Waiman Long <longman@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/memcontrol.h |    2 ++
 mm/vmscan.c                |   25 +++++++++++++++++++++++--
 2 files changed, 25 insertions(+), 2 deletions(-)

diff -puN include/linux/memcontrol.h~mm-clear-shrinker-bit-if-there-are-no-objects-related-to-memcg include/linux/memcontrol.h
--- a/include/linux/memcontrol.h~mm-clear-shrinker-bit-if-there-are-no-objects-related-to-memcg
+++ a/include/linux/memcontrol.h
@@ -1281,6 +1281,8 @@ static inline void memcg_set_shrinker_bi
 	rcu_read_lock();
 	map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
+	/* Pairs with smp mb in shrink_slab() */
+	smp_mb__before_atomic();
 	set_bit(shrinker_id, map->map);
 	rcu_read_unlock();
 }
diff -puN mm/vmscan.c~mm-clear-shrinker-bit-if-there-are-no-objects-related-to-memcg mm/vmscan.c
--- a/mm/vmscan.c~mm-clear-shrinker-bit-if-there-are-no-objects-related-to-memcg
+++ a/mm/vmscan.c
@@ -597,8 +597,29 @@ static unsigned long shrink_slab_memcg(g
 			continue;

 		ret = do_shrink_slab(&sc, shrinker, priority);
-		if (ret == SHRINK_EMPTY)
-			ret = 0;
+		if (ret == SHRINK_EMPTY) {
+			clear_bit(i, map->map);
+			/*
+			 * After the shrinker reported that it had no objects to
+			 * free, but before we cleared the corresponding bit in
+			 * the memcg shrinker map, a new object might have been
+			 * added.  To make sure, we have the bit set in this case,
+			 * we invoke the shrinker one more time and re-set the bit
+			 * if it reports that it is not empty anymore.  The memory
+			 * barrier here pairs with the barrier in
+			 * memcg_set_shrinker_bit():
+			 *
+			 * list_lru_add()          shrink_slab_memcg()
+			 *   list_add_tail()         clear_bit()
+			 *   <MB>                    <MB>
+			 *   set_bit()               do_shrink_slab()
+			 */
+			smp_mb__after_atomic();
+			ret = do_shrink_slab(&sc, shrinker, priority);
+			if (ret == SHRINK_EMPTY)
+				ret = 0;
+			else
+				memcg_set_shrinker_bit(memcg, nid, i);
+		}
 		freed += ret;

 		if (rwsem_is_contended(&shrinker_rwsem)) {
_

Patches currently in -mm which might be from ktkhai@xxxxxxxxxxxxx are

memcg-remove-memcg_cgroup-id-from-idr-on-mem_cgroup_css_alloc-failure.patch
list_lru-combine-code-under-the-same-define.patch
mm-introduce-config_memcg_kmem-as-combination-of-config_memcg-config_slob.patch
mm-assign-id-to-every-memcg-aware-shrinker.patch
memcg-move-up-for_each_mem_cgroup-_tree-defines.patch
mm-assign-memcg-aware-shrinkers-bitmap-to-memcg.patch
mm-refactoring-in-workingset_init.patch
fs-refactoring-in-alloc_super.patch
fs-propagate-shrinker-id-to-list_lru.patch
list_lru-add-memcg-argument-to-list_lru_from_kmem.patch
list_lru-pass-dst_memcg-argument-to-memcg_drain_list_lru_node.patch
list_lru-pass-lru-argument-to-memcg_drain_list_lru_node.patch
mm-export-mem_cgroup_is_root.patch
mm-set-bit-in-memcg-shrinker-bitmap-on-first-list_lru-item-apearance.patch
mm-iterate-only-over-charged-shrinkers-during-memcg-shrink_slab.patch
mm-add-shrink_empty-shrinker-methods-return-value.patch
mm-clear-shrinker-bit-if-there-are-no-objects-related-to-memcg.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html