On Mon, Jun 22, 2020 at 6:58 PM Roman Gushchin <guro@xxxxxx> wrote:
>
> Switch to per-object accounting of non-root slab objects.
>
> Charging is performed using obj_cgroup API in the pre_alloc hook.
> Obj_cgroup is charged with the size of the object and the size of
> metadata: as now it's the size of an obj_cgroup pointer. If the amount of
> memory has been charged successfully, the actual allocation code is
> executed. Otherwise, -ENOMEM is returned.
>
> In the post_alloc hook if the actual allocation succeeded, corresponding
> vmstats are bumped and the obj_cgroup pointer is saved. Otherwise, the
> charge is canceled.
>
> On the free path obj_cgroup pointer is obtained and used to uncharge the
> size of the releasing object.
>
> Memcg and lruvec counters are now representing only memory used by active
> slab objects and do not include the free space. The free space is shared
> and doesn't belong to any specific cgroup.
>
> Global per-node slab vmstats are still modified from
> (un)charge_slab_page() functions. The idea is to keep all slab pages
> accounted as slab pages on system level.
>
> Signed-off-by: Roman Gushchin <guro@xxxxxx>
> Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>
> ---

[snip]

> +static inline struct kmem_cache *memcg_slab_pre_alloc_hook(struct kmem_cache *s,
> +							   struct obj_cgroup **objcgp,
> +							   size_t objects, gfp_t flags)
> +{
> +	struct kmem_cache *cachep;
> +
> +	cachep = memcg_kmem_get_cache(s, objcgp);
> +	if (is_root_cache(cachep))
> +		return s;
> +
> +	if (obj_cgroup_charge(*objcgp, flags, objects * obj_full_size(s))) {
> +		memcg_kmem_put_cache(cachep);

I think you forgot to put obj_cgroup_put(*objcgp) here again.

> +		cachep = NULL;
> +	}
> +
> +	return cachep;
> +}
> +

After the above fix:

Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>