On Mon, May 21, 2018 at 12:17:07PM +0300, Kirill Tkhai wrote:
> >> +static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
> >> +                                        struct mem_cgroup *memcg, int priority)
> >> +{
> >> +        struct memcg_shrinker_map *map;
> >> +        unsigned long freed = 0;
> >> +        int ret, i;
> >> +
> >> +        if (!memcg_kmem_enabled() || !mem_cgroup_online(memcg))
> >> +                return 0;
> >> +
> >> +        if (!down_read_trylock(&shrinker_rwsem))
> >> +                return 0;
> >> +
> >> +        /*
> >> +         * 1) Caller passes only alive memcg, so map can't be NULL.
> >> +         * 2) shrinker_rwsem protects from maps expanding.
> >> +         */
> >> +        map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
> >> +                                        true);
> >> +        BUG_ON(!map);
> >> +
> >> +        for_each_set_bit(i, map->map, memcg_shrinker_nr_max) {
> >> +                struct shrink_control sc = {
> >> +                        .gfp_mask = gfp_mask,
> >> +                        .nid = nid,
> >> +                        .memcg = memcg,
> >> +                };
> >> +                struct shrinker *shrinker;
> >> +
> >> +                shrinker = idr_find(&shrinker_idr, i);
> >> +                if (unlikely(!shrinker)) {
> >
> > Nit: I don't think 'unlikely' is required here as this is definitely not
> > a hot path.
>
> On big machines with many containers and overcommit, shrink_slab() in
> general is a very hot path. See the patchset description: there are
> configurations where shrink_slab() is the only thing executing and
> occupies 100% of the CPU; that is what this patchset was made for.
>
> This is the one place where shrinker can be NULL, and only in case of a
> race with parallel registration, so I don't see anything wrong with
> giving the compiler some information about branch prediction.

OK. If you're confident this 'unlikely' is useful, let's leave it as is.
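
For readers following along: below is a minimal user-space sketch of what
the hint buys. The unlikely() definition mirrors the kernel's in
include/linux/compiler.h; the scan_one() helper is hypothetical and only
mimics the idr_find() NULL check above, it is not the patchset's code.

#include <stdio.h>

/*
 * Mirrors the kernel macro: !! normalizes any pointer/integer to 0 or 1,
 * and __builtin_expect tells the compiler which value is the common case,
 * so the NULL-handling branch is laid out off the hot path.
 */
#define unlikely(x)     __builtin_expect(!!(x), 0)

/* Hypothetical stand-in for one iteration of the shrinker loop. */
static long scan_one(void *shrinker)
{
        if (unlikely(!shrinker)) {
                /* Cold path: lost a race with parallel registration. */
                return 0;
        }
        /* Hot path: kept as the straight-line fall-through. */
        return 1;
}

int main(void)
{
        long freed = 0;
        int obj;

        freed += scan_one(&obj);        /* common case */
        freed += scan_one(NULL);        /* rare race window */
        printf("%ld\n", freed);
        return 0;
}

Comparing the generated assembly with and without the hint (gcc -O2 -S)
shows the NULL branch moved out of the fall-through path, which is the
effect being argued for on a loop this hot.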