Re: [PATCH] fs, mm: account filp and names caches to kmemcg

On Tue, Oct 10, 2017 at 11:14:30AM +0200, Michal Hocko wrote:
> On Mon 09-10-17 16:26:13, Johannes Weiner wrote:
> > It's consistent in the sense that only page faults enable the memcg
> > OOM killer. It's not the type of memory that decides, it's whether the
> > allocation context has a channel to communicate an error to userspace.
> > 
> > Whether userspace is able to handle -ENOMEM from syscalls was a voiced
> > concern at the time this patch was merged, although there haven't been
> > any reports so far,
> 
> Well, I remember reports about MAP_POPULATE breaking or at least having
> an unexpected behavior.

Hm, that slipped past me. Did we do something about these? Or did they
fix userspace?

> Well, we should be able to do that with the oom_reaper. At least for v2
> which doesn't have synchronous userspace oom killing.

I don't see how the OOM reaper is a guarantee as long as we have this:

	if (!down_read_trylock(&mm->mmap_sem)) {
		ret = false;
		trace_skip_task_reaping(tsk->pid);
		goto unlock_oom;
	}

What do you mean by 'v2'?

> > > c) Overcharge kmem to oom memcg and queue an async memcg limit checker,
> > >    which will oom kill if needed.
> > 
> > This makes the most sense to me. Architecturally, I imagine this would
> > look like b), with an OOM handler at the point of return to userspace,
> > except that we'd overcharge instead of retrying the syscall.
> 
> I do not think we should break the hard limit semantic if possible. We
> can currently allow that for allocations which are very short term (oom
> victims) or too important to fail but allowing that for kmem charges in
> general sounds like too easy to runaway.

I'm not sure there is a convenient way out of this.

If we want to respect the hard limit AND guarantee allocation success,
the OOM killer has to free memory reliably - which it doesn't. But if
it did, we could also break the limit temporarily and have the OOM
killer replenish the pool before the allocating task returns to
userspace. The allocation wouldn't have to be short-lived, since
memory is fungible.

Until the OOM killer is 100% reliable, we have the choice between
sometimes deadlocking the cgroup tasks and everything that interacts
with them, returning -ENOMEM for syscalls, or breaking the hard limit
guarantee during memcg OOM.

It seems breaking the limit temporarily in order to reclaim memory is
the best option. There is kernel memory we don't account to the memcg
already because we think it's probably not going to be significant, so
the isolation isn't 100% watertight in the first place. And I'd rather
have the worst-case effect of a cgroup OOMing be spilling over its
hard limit than deadlocking things inside and outside the cgroup.
