Re: [PATCH -mm 0/8] memcg: reparent kmem on css offline

Hi Johannes,

On Mon, Jul 07, 2014 at 10:25:06AM -0400, Johannes Weiner wrote:
> Hi Vladimir,
> 
> On Mon, Jul 07, 2014 at 04:00:05PM +0400, Vladimir Davydov wrote:
> > Hi,
> > 
> > This patch set introduces re-parenting of kmem charges on memcg css
> > offline. The idea lying behind it is very simple - instead of pointing
> > from kmem objects (kmem caches, non-slab kmem pages) directly to the
> > memcg which they are charged against, we make them point to a proxy
> > object, mem_cgroup_kmem_context, which, in turn, points to the memcg
> > which it belongs to. As a result on memcg offline, it's enough to only
> > re-parent the memcg's mem_cgroup_kmem_context.
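
For context, the indirection works roughly like this (simplified and
with illustrative names; the real structures also carry refcounting
and locking):

	/* every kmem object points here instead of at the memcg */
	struct mem_cgroup_kmem_context {
		struct mem_cgroup *memcg;	/* owner, updated on reparent */
	};

	/* so css offline reduces to a single pointer update */
	static void kmem_context_reparent(struct mem_cgroup_kmem_context *ctx,
					  struct mem_cgroup *parent)
	{
		/* the real code would serialize this against charging */
		ctx->memcg = parent;
	}
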
> 
> The motivation for this was to clear out all references to a memcg by
> the time it's offlined, so that the unreachable css can be freed soon.
> 
> However, recent cgroup core changes further disconnected the css from
> the cgroup object itself, so it's no longer as urgent to free the css.
> 
> In addition, Tejun made offlined css iterable and split css_tryget()
> and css_tryget_online(), which would allow memcg to pin the css until
> the last charge is gone while continuing to iterate and reclaim it on
> hierarchical pressure, even after it was offlined.
> 
> This would obviate the need for reparenting as a whole, not just kmem
> pages, but even remaining page cache.  Michal already obsoleted the
> force_empty knob that reparents as a fallback, and whether the cache
> pages are in the parent or in a ghost css after cgroup deletion does
> not make a real difference from a user point of view, they still get
> reclaimed when the parent experiences pressure.

So that means there's no need for a proxy object between kmem objects
and the memcg they are charged against (mem_cgroup_kmem_context in this
patch set), because it is now OK to pin the css from kmem allocations.
Furthermore, there will be no need to reparent per-memcg list_lrus when
they are introduced. That's nice!
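
To illustrate, the charge path could then pin the css directly,
something like this (the helpers are illustrative, not the current
code):

	static int memcg_kmem_charge(struct mem_cgroup *memcg,
				     gfp_t gfp, unsigned int nr_pages)
	{
		/* works even after the memcg has been taken offline */
		if (!css_tryget(&memcg->css))
			return -ENOENT;
		/* ... charge the res counters here ... */
		return 0;
	}

	static void memcg_kmem_uncharge(struct mem_cgroup *memcg,
					unsigned int nr_pages)
	{
		/* ... uncharge the res counters here ... */
		css_put(&memcg->css);	/* last put releases the dead css */
	}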

> You could then reap dead slab caches as part of the regular per-memcg
> slab scanning in reclaim, without having to resort to auxiliary lists,
> vmpressure events etc.

Do you mean adding a per-memcg shrinker that will call
kmem_cache_shrink for all the memcg's caches on memcg/global pressure?
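
Something along these lines, I suppose (for_each_memcg_cache() and the
memcg field of shrink_control are made up here, just to show the shape
of it; kmem_cache_shrink() is the existing slab interface):

	static unsigned long memcg_slab_scan(struct shrinker *shrink,
					     struct shrink_control *sc)
	{
		struct kmem_cache *s;

		/* hypothetical iterator over the memcg's caches */
		for_each_memcg_cache(s, sc->memcg)
			kmem_cache_shrink(s);	/* release empty partial slabs */

		/* this sketch doesn't count what was actually freed */
		return SHRINK_STOP;
	}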

Actually, I recently made dead caches self-destructive, at the cost of
slowing down kfrees to dead caches (see
https://www.lwn.net/Articles/602330/; it's already in the mmotm tree),
so no dead cache reaping is necessary. Do you think we still need it?

> I think it would save us a lot of code and complexity.  You want
> per-memcg slab scanning *anyway*, all we'd have to change in the
> existing code would be to pin the css until the LRUs and kmem caches
> are truly empty, and switch mem_cgroup_iter() to css_tryget().
> 
> Would this make sense to you?

Hmm, interesting. Thank you for such a thorough explanation.

One question. Do we still need to free mem_cgroup->kmemcg_id on css
offline so that it can be reused by new kmem-active cgroups (currently
we don't)?

If we don't free it, the root_cache->memcg_params->memcg_caches arrays
may become really huge due to lots of dead css's holding on to their
ids.
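
For the record, that's because every root cache keeps a per-memcg
array indexed by kmemcg_id, roughly (simplified from the actual
memcg_cache_params):

	struct memcg_cache_params {
		/* ... */
		struct kmem_cache *memcg_caches[0];	/* indexed by kmemcg_id */
	};

	static struct kmem_cache *cache_from_memcg_idx(struct kmem_cache *root,
						       int kmemcg_id)
	{
		return root->memcg_params->memcg_caches[kmemcg_id];
	}

The arrays have to be sized to the highest id still in use, so every
dead css pinning an id keeps a slot in each of them.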

Thanks.
