On Fri, Oct 25, 2019 at 08:00:32PM +0000, Roman Gushchin wrote:
> On Fri, Oct 25, 2019 at 03:41:18PM -0400, Johannes Weiner wrote:
> > On Thu, Oct 17, 2019 at 05:28:13PM -0700, Roman Gushchin wrote:
> > > +static inline struct kmem_cache *memcg_slab_pre_alloc_hook(struct kmem_cache *s,
> > > +						struct mem_cgroup **memcgp,
> > > +						size_t size, gfp_t flags)
> > > +{
> > > +	struct kmem_cache *cachep;
> > > +
> > > +	cachep = memcg_kmem_get_cache(s, memcgp);
> > > +	if (is_root_cache(cachep))
> > > +		return s;
> > > +
> > > +	if (__memcg_kmem_charge_subpage(*memcgp, size * s->size, flags)) {
> > > +		mem_cgroup_put(*memcgp);
> > > +		memcg_kmem_put_cache(cachep);
> > > +		cachep = NULL;
> > > +	}
> > > +
> > > +	return cachep;
> > > +}
> > > +
> > >  static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
> > >  					      struct mem_cgroup *memcg,
> > >  					      size_t size, void **p)
> > >  {
> > >  	struct mem_cgroup_ptr *memcg_ptr;
> > > +	struct lruvec *lruvec;
> > >  	struct page *page;
> > >  	unsigned long off;
> > >  	size_t i;
> > > @@ -439,6 +393,11 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
> > >  			off = obj_to_index(s, page, p[i]);
> > >  			mem_cgroup_ptr_get(memcg_ptr);
> > >  			page->mem_cgroup_vec[off] = memcg_ptr;
> > > +			lruvec = mem_cgroup_lruvec(page_pgdat(page), memcg);
> > > +			mod_lruvec_memcg_state(lruvec, cache_vmstat_idx(s),
> > > +					       s->size);
> > > +		} else {
> > > +			__memcg_kmem_uncharge_subpage(memcg, s->size);
> > >  		}
> > >  	}
> > >  	mem_cgroup_ptr_put(memcg_ptr);
> >
> > The memcg_ptr as a collection vessel for object references makes a lot
> > of sense. But this code showcases that it should be a first-class
> > memory tracking API that the allocator interacts with, rather than
> > having to deal with a combination of memcg_ptr and memcg.
> >
> > In the two hunks here, on one hand we charge bytes to the memcg
> > object, and then handle all the refcounting through a different
> > bucketing object. To support that in the first place, we have to
> > overload the memcg API all the way down to try_charge() to support
> > bytes and pages. This is difficult to follow throughout all layers.
> >
> > What would be better is for this to be an abstraction layer for a
> > subpage object tracker that sits on top of the memcg page tracker -
> > not unlike the page allocator and the slab allocators themselves.
> >
> > And then the slab allocator would only interact with the subpage
> > object tracker, and the object tracker would deal with the memcg page
> > tracker under the hood.
>
> Yes, the idea makes total sense to me. I'm not sure I like the new naming
> (I have to spend some time with it first), but the idea of moving
> stocks and leftovers to the memcg_ptr/obj_cgroup level is really good.

I'm not set on the naming, it was just to illustrate the structuring.
I picked something that has cgroup in it, is not easily confused with
the memcg API, and shortens nicely to local variable names
(obj_cgroup -> objcg), but I'm all for a better name.

> I'll include something based on your proposal into the next version
> of the patchset.

Thanks, looking forward to it.
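
For anyone skimming the thread, the layering under discussion can be
sketched in a few lines of plain userspace C: a byte-granular object
tracker that refills itself from a page-granular charger in whole-page
chunks, so the slab side only ever deals in bytes and the page tracker
only ever deals in pages. The names below (memcg_charge_pages,
obj_cgroup_charge, nr_bytes_cached, etc.) are placeholders for this
sketch, not the API actually proposed in the patchset.

/*
 * Toy model of the proposed layering: obj_cgroup tracks bytes and
 * caches pre-charged space; the memcg below it only sees whole pages.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096

struct mem_cgroup {
	long nr_pages_charged;		/* page-level accounting */
};

struct obj_cgroup {
	struct mem_cgroup *memcg;	/* page tracker we refill from */
	size_t nr_bytes_cached;		/* charged but not yet handed out */
};

/* page-level tracker: in the kernel this is where charging could fail */
static bool memcg_charge_pages(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	memcg->nr_pages_charged += nr_pages;
	return true;
}

/* object-level tracker: charge bytes, refilling in whole-page chunks */
static bool obj_cgroup_charge(struct obj_cgroup *objcg, size_t bytes)
{
	if (objcg->nr_bytes_cached < bytes) {
		size_t missing = bytes - objcg->nr_bytes_cached;
		unsigned int nr_pages = (missing + PAGE_SIZE - 1) / PAGE_SIZE;

		if (!memcg_charge_pages(objcg->memcg, nr_pages))
			return false;
		objcg->nr_bytes_cached += (size_t)nr_pages * PAGE_SIZE;
	}
	objcg->nr_bytes_cached -= bytes;
	return true;
}

static void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t bytes)
{
	/* return bytes to the local cache; a real version would also
	 * hand surplus pages back to the page tracker eventually */
	objcg->nr_bytes_cached += bytes;
}

int main(void)
{
	struct mem_cgroup memcg = { 0 };
	struct obj_cgroup objcg = { .memcg = &memcg };

	/* the "slab side" only ever talks bytes to the object tracker */
	obj_cgroup_charge(&objcg, 256);
	obj_cgroup_charge(&objcg, 1024);
	obj_cgroup_uncharge(&objcg, 256);

	printf("pages charged: %ld, bytes cached: %zu\n",
	       memcg.nr_pages_charged, objcg.nr_bytes_cached);
	return 0;
}

The per-objcg byte cache is what keeps the page tracker page-granular
while individual objects are charged and uncharged in bytes.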