Re: [PATCH] fs, mm: account filp and names caches to kmemcg

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Mon 09-10-17 16:26:13, Johannes Weiner wrote:
> On Mon, Oct 09, 2017 at 10:52:44AM -0700, Greg Thelen wrote:
> > Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > 
> > > On Fri 06-10-17 12:33:03, Shakeel Butt wrote:
> > >> >>       names_cachep = kmem_cache_create("names_cache", PATH_MAX, 0,
> > >> >> -                     SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
> > >> >> +                     SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, NULL);
> > >> >
> > >> > I might be wrong but isn't name cache only holding temporary objects
> > >> > used for path resolution which are not stored anywhere?
> > >> >
> > >> 
> > >> Even though they're temporary, many containers can together use a
> > >> significant amount of transient uncharged memory. We've seen machines
> > >> with 100s of MiBs in names_cache.
> > >
> > > Yes that might be possible but are we prepared for random ENOMEM from
> > > vfs calls which need to allocate a temporary name?
> > >
> > >> 
> > >> >>       filp_cachep = kmem_cache_create("filp", sizeof(struct file), 0,
> > >> >> -                     SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
> > >> >> +                     SLAB_HWCACHE_ALIGN | SLAB_PANIC | SLAB_ACCOUNT, NULL);
> > >> >>       percpu_counter_init(&nr_files, 0, GFP_KERNEL);
> > >> >>  }
> > >> >
> > >> > Don't we have a limit for the maximum number of open files?
> > >> >
> > >> 
> > >> Yes, there is a system limit of maximum number of open files. However
> > >> this limit is shared between different users on the system and one
> > >> user can hog this resource. To cater that, we set the maximum limit
> > >> very high and let the memory limit of each user limit the number of
> > >> files they can open.
> > >
> > > Similarly here. Are all syscalls allocating a fd prepared to return
> > > ENOMEM?
> > >
> > > -- 
> > > Michal Hocko
> > > SUSE Labs
> > 
> > Even before this patch I find memcg oom handling inconsistent.  Page
> > cache pages trigger oom killer and may allow caller to succeed once the
> > kernel retries.  But kmem allocations don't call oom killer.
> 
> It's consistent in the sense that only page faults enable the memcg
> OOM killer. It's not the type of memory that decides, it's whether the
> allocation context has a channel to communicate an error to userspace.
> 
> Whether userspace is able to handle -ENOMEM from syscalls was a voiced
> concern at the time this patch was merged, although there haven't been
> any reports so far,

Well, I remember reports about MAP_POPULATE breaking or at least having
an unexpected behavior.

> and it seemed like the lesser evil between that
> and deadlocking the kernel.

agreed on this part though

> If we could find a way to invoke the OOM killer safely, I would
> welcome such patches.

Well, we should be able to do that with the oom_reaper. At least for v2
which doesn't have synchronous userspace oom killing.

[...]

> > c) Overcharge kmem to oom memcg and queue an async memcg limit checker,
> >    which will oom kill if needed.
> 
> This makes the most sense to me. Architecturally, I imagine this would
> look like b), with an OOM handler at the point of return to userspace,
> except that we'd overcharge instead of retrying the syscall.

I do not think we should break the hard limit semantic if possible. We
can currently allow that for allocations which are very short term (oom
victims) or too important to fail but allowing that for kmem charges in
general sounds like too easy to runaway.

-- 
Michal Hocko
SUSE Labs



[Index of Archives]     [Linux Ext4 Filesystem]     [Union Filesystem]     [Filesystem Testing]     [Ceph Users]     [Ecryptfs]     [AutoFS]     [Kernel Newbies]     [Share Photos]     [Security]     [Netfilter]     [Bugtraq]     [Yosemite News]     [MIPS Linux]     [ARM Linux]     [Linux Security]     [Linux Cachefs]     [Reiser Filesystem]     [Linux RAID]     [Samba]     [Device Mapper]     [CEPH Development]
  Powered by Linux