Re: [PATCH 0/3] memcg: Slow down swap allocation as the available space gets depleted

On Wed, Apr 22, 2020 at 05:43:18PM +0200, Michal Hocko wrote:
> On Wed 22-04-20 10:15:14, Johannes Weiner wrote:
> > On Wed, Apr 22, 2020 at 03:26:32PM +0200, Michal Hocko wrote:
> > > That being said I believe our discussion is missing an important part.
> > > There is no description of the swap.high semantic. What can user expect
> > > when using it?
> > 
> > Good point, we should include that in cgroup-v2.rst. How about this?
> > 
> > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > index bcc80269bb6a..49e8733a9d8a 100644
> > --- a/Documentation/admin-guide/cgroup-v2.rst
> > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > @@ -1370,6 +1370,17 @@ PAGE_SIZE multiple when read back.
> >  	The total amount of swap currently being used by the cgroup
> >  	and its descendants.
> >  
> > +  memory.swap.high
> > +	A read-write single value file which exists on non-root
> > +	cgroups.  The default is "max".
> > +
> > +	Swap usage throttle limit.  If a cgroup's swap usage exceeds
> > +	this limit, allocations inside the cgroup will be throttled.
> 
> Hm, so this doesn't talk about which allocations are affected. This is
> good for potential future changes, but I am not sure it is enough to
> make any educated guess about the actual effects. One could expect that
> only those allocations which could contribute to future memory.swap
> usage are affected. I fully realize that we do not want to be very
> specific, but we want to provide something useful, I believe. I am sorry,
> but I do not have a good suggestion on how to make this better, mostly
> because I still struggle with how this should behave to be sane.

I honestly don't really follow you here. Why is it not helpful to say
all allocations will slow down when condition X is met? We do the same
for memory.high.
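To make that analogy concrete, here is a rough sketch of the semantic,
modeled on the existing memory.high throttling path. The swap_high
field and the helper name are made up for illustration; this is not
the actual patch code:

static void charge_check_swap_high(struct mem_cgroup *memcg,
				   unsigned int nr_pages)
{
	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
		if (page_counter_read(&memcg->swap) <=
		    READ_ONCE(memcg->swap_high))
			continue;
		/*
		 * Like memory.high: record the excess against the
		 * allocating task.  The penalty is paid as sleeps on
		 * return to userspace, which is what slows down all
		 * allocations in the cgroup, not just swap-bound ones.
		 */
		current->memcg_nr_pages_over_high += nr_pages;
		set_notify_resume(current);
		break;
	}
}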

> I am also missing some information about what the user can actually do
> about this situation, and an explicit statement that the throttling is
> not going away until the swap usage is shrunk, and that the kernel is
> not capable of doing that on its own without help from userspace. This
> is really different from memory.high, where the kernel has means to
> deal with the excess and shrink it down in most cases. The following
> would clarify it

I think we may be talking past each other. The user can do the same
thing as in any OOM situation: wait for the kill.

Swap being full is an OOM situation.

Yes, that does not match the kernel's internal definition of an OOM
situation. But we've already established that kernel OOM killing has a
different objective (memory deadlock avoidance) than userspace OOM
killing (quality of life) [1].

[1] https://lkml.org/lkml/2019/8/4/15

As Tejun said, things like earlyoom and oomd already kill based on
swap exhaustion, no further questions asked. Reclaim has been running
for a while and has gone after all the low-hanging fruit: it doesn't
swap as long as there is easy cache; it also didn't just swap a little,
it filled up all of swap; and the pages in swap are all cold too,
because refaults would free that space again.

The workingset is hugely oversized for the available capacity, and
nobody has any interest in sticking around to see what tricks reclaim
still has up its sleeves (hint: nothing good). From here on out, it's
all thrashing and pain. The kernel might not OOM kill yet, but the quality
of life expectancy for a workload with full swap is trending toward zero.

We've been killing based on swap exhaustion as a stand-alone trigger
for several years now and it's never been the wrong call.
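For illustration, a minimal sketch of the kind of check those daemons
do. The cgroup path and the 90% trigger are made up for the example;
real oomd logic is of course more involved:

#include <stdio.h>
#include <stdlib.h>

/* Read a single numeric value from a cgroup2 control file. */
static long read_val(const char *path)
{
	char buf[32];
	long val = -1;
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (fgets(buf, sizeof(buf), f))
		val = strtol(buf, NULL, 10);	/* "max" parses as 0 */
	fclose(f);
	return val;
}

int main(void)
{
	const char *dir = "/sys/fs/cgroup/workload.slice";	/* assumed */
	char path[256];
	long cur, max;

	snprintf(path, sizeof(path), "%s/memory.swap.current", dir);
	cur = read_val(path);
	snprintf(path, sizeof(path), "%s/memory.swap.max", dir);
	max = read_val(path);

	if (cur < 0 || max <= 0)	/* no limit or unreadable: nothing to do */
		return 0;
	if (cur >= max / 10 * 9)	/* >= ~90% full: treat as OOM */
		printf("swap nearly full (%ld/%ld), kill the workload\n",
		       cur, max);
	return 0;
}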

All swap.high does is acknowledge that swap-full is a common OOM
situation from a userspace view, and help userspace handle that
situation.

Just like memory.high acknowledges that when reclaim fails per the
kernel's definition, it's an OOM situation from a kernel view, and
helps userspace handle that.

> for me
> 	"Once the limit is exceeded, it is expected that userspace
> 	 will act and either free up the swapped-out space or tune
> 	 the limit based on its needs. The kernel itself is not able
> 	 to do that on its own."

I mean, in rare cases, maybe userspace can do some load shedding and
be smart about it. But we certainly don't expect it to, just like we
don't expect it to when memory.high starts injecting sleeps. We expect
the workload to die, usually.


