Re: [PATCH 2/3] memcg: ratify and consolidate over-charge handling

Hello, Michal.

On Mon, Sep 14, 2015 at 09:32:25PM +0200, Michal Hocko wrote:
> >   mem_cgroup_try_charge() needs to switch
> >   the returned cgroup to the root one.
> > 
> > The reality is that in memcg there are cases where we are forced
> > and/or willing to go over the limit.  Each such case needs to be
> > scrutinized and justified but there definitely are situations where
> > that is the right thing to do.  We already do this but with a
> > superficial and inconsistent disguise which leads to unnecessary
> > complications.
> >
> > This patch updates try_charge() so that it over-charges and returns 0
> > when deemed necessary.  -EINTR return is removed along with all
> > special case handling in the callers.
> 
> OK the code is easier in the end, although I would argue that try_charge
> could return ENOMEM for GFP_NOWAIT instead of overcharging (this would
> e.g. allow precharge to bail out earlier). Something for a separate patch I
> guess.

Hmm... a GFP_NOWAIT charge already fails unless it also has
__GFP_NOFAIL.
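
To illustrate, here is a minimal userspace sketch of that decision; the
flag values and the helper name are hypothetical, simplified stand-ins
for the real logic in mm/memcontrol.c and include/linux/gfp.h:

```c
#include <assert.h>
#include <stdbool.h>
#include <errno.h>

/* Hypothetical flag values for illustration only; the real
 * definitions live in include/linux/gfp.h. */
#define __GFP_DIRECT_RECLAIM 0x1u
#define __GFP_NOFAIL         0x2u
#define GFP_NOWAIT           0x0u  /* no direct reclaim allowed */

/* Sketch of the decision being discussed: a charge that pushes us
 * over the limit and cannot reclaim (GFP_NOWAIT) fails with -ENOMEM,
 * unless the caller insisted with __GFP_NOFAIL, in which case we
 * over-charge and return 0. */
static int try_charge_sketch(unsigned int gfp_mask, bool over_limit)
{
	if (!over_limit)
		return 0;		/* charge fits, nothing to do */
	if (gfp_mask & __GFP_NOFAIL)
		return 0;		/* forced: over-charge and succeed */
	if (!(gfp_mask & __GFP_DIRECT_RECLAIM))
		return -ENOMEM;		/* cannot reclaim, fail */
	return 0;			/* assume reclaim made room */
}
```

So in this model GFP_NOWAIT alone never leads to an over-charge; only
the __GFP_NOFAIL combination does.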

> Anyway I still do not like usage > max/hard limit presented to userspace
> because it looks like a clear breaking of max/hard limit semantic. I
> realize that we cannot solve the underlying problem easily or it might
> be unfeasible but we should consider how to present this state to the
> userspace.
> We have basically 2 options AFAICS. We can either document that a
> _temporary_ breach of the max/hard limit is allowed, or we can hide
> this fact and always present min(current, max).
> The first one might be better for easier debugging and it is also
> more honest about the current state, but it makes the definition of
> the hard limit a bit weird. It also exposes implementation details to
> the userspace.
> The other choice is clearly lying, but users shouldn't care about the
> implementation details, and if the state is really temporary then the
> userspace shouldn't even notice. There is also a risk that somebody
> is already depending on current < max, which happened to work without
> kmem until now.
> This is something to be solved in a separate patch I guess, but we
> should think about it. I am not entirely clear on this myself, but I
> am more inclined to the first option: simply document the potential
> corner case and temporary breach.

I'm pretty sure we don't wanna lie.  Just document that temporary,
small-scale breaches may happen.  I don't even think this is an
implementation detail.  The fact that we have separate high and max
limits is already an admission that this is inherently different from
the global case: memcg is consciously and actively making trade-offs
in how it handles global and local memory pressure.  I think that's
the right thing to do and something inherent to what memcg is doing
here.

Thanks.

-- 
tejun

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .


