Re: [RFD RESEND] cgroup: Persistent memory usage tracking

Hello,

On Mon, Aug 22, 2022 at 08:01:41PM -0700, Roman Gushchin wrote:
> > > >    One solution that I can think of is leveraging the resource domain
> > > >    concept which is currently only used for threaded cgroups. All memory
> > > >    usages of threaded cgroups are charged to their resource domain cgroup
> > > >    which hosts the processes for those threads. The persistent usages have a
> > > >    similar pattern, so maybe the service level cgroup can declare that it's
> > > >    the encompassing resource domain and the instance cgroup can say whether
> > > >    it's gonna charge e.g. the tmpfs instance to its own or the encompassing
> > > >    resource domain.
> > > >
> > > 
> > > I think this sounds excellent and addresses our use cases. Basically
> > > the tmpfs/bpf memory would get charged to the encompassing resource
> > > domain cgroup rather than the instance cgroup, making the memory usage
> > > of the first and second+ instances consistent and predictable.
> > > 
> > > Would love to hear from other memcg folks what they would think of
> > > such an approach. I would also love to hear what kind of interface you
> > > have in mind. Perhaps a cgroup tunable that says whether it's going to
> > > charge the tmpfs/bpf instance to itself or to the encompassing
> > > resource domain?
> > 
> > I like this too. It makes shared charging predictable, with a coherent
> > resource hierarchy (congruent OOM, CPU, IO domains), and without the
> > need for cgroup paths in tmpfs mounts or similar.
> > 
> > As far as who is declaring what goes, though: if the instance groups
> > can declare arbitrary files/objects persistent or shared, they'd be
> > able to abuse this and sneak private memory past local limits and
> > burden the wider persistent/shared domain with it.

My thought was that the persistent cgroup and the instance cgroups should
belong to the same trust domain, with system-level control applied at the
resource domain level. The application may shift usage between persistent and
per-instance however it wants, and may even configure resource control at
that level, but all of that is for its own accounting accuracy and benefit.
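
For illustration, the kind of layout I have in mind (all names made up):

  workload.slice          <- resource domain, system-level limits go here
    shared.scope          <- persistent tmpfs / bpf usage charged here
    instance-1.scope      <- per-instance cgroup
    instance-2.scope      <- per-instance cgroup

Where a given tmpfs file or bpf map gets charged within that subtree is up
to the application, but the limits on workload.slice always see the total.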

> > I'm thinking it might make more sense for the service level to declare
> > which objects are persistent and shared across instances.
> 
> I like this idea.
> 
> > If that's the case, we may not need a two-component interface. Just
> > the ability for an intermediate cgroup to say: "This object's future
> > memory is to be charged to me, not the instantiating cgroup."
> > 
> > Can we require a process in the intermediate cgroup to set up the file
> > or object, and use madvise/fadvise to say "charge me", before any
> > instances are launched?
> 
> We need to think how to make this interface convenient to use.
> First, these persistent resources are likely created by some agent software,
> not the main workload. So the requirement to call madvise() from the
> actual cgroup might be not easily achievable.

So one worry that I have with this is that it requires the application itself
to be aware of the cgroup topology and to restructure itself so that the
allocation of those resources is factored out into something else. Maybe
that's not a huge problem but it may limit the applicability quite a bit.

If all the resource constraints and structures can be expressed on the cgroup
side and configured by the management agent, the application can simply e.g.
madvise whatever memory region, or flag bpf maps, as "these are persistent"
and the rest can be handled by the system. If the agent has set up the
environment for that, the memory gets accounted accordingly; otherwise, it
behaves as if the tagging didn't exist. Asking the application to set up all
its resources in separate steps might require significant restructuring and
knowledge of how the hierarchy is set up in many cases.
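
Purely as an illustration of the application side (MADV_PERSISTENT is a
made-up name, nothing like it exists today), the tagging could look like:

#include <stddef.h>
#include <sys/mman.h>

/* hypothetical advice value, for illustration only */
#define MADV_PERSISTENT	0x100

/* map a region of an existing tmpfs file and tag it as persistent */
static void *map_persistent(int tmpfs_fd, size_t len)
{
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_SHARED, tmpfs_fd, 0);

	if (buf == MAP_FAILED)
		return NULL;

	/* charged to the persistent domain iff the agent set one up */
	madvise(buf, len, MADV_PERSISTENT);
	return buf;
}

If no persistent domain has been configured, the call is effectively a no-op
from the application's point of view.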

> So _maybe_ something like writing a fd into cgroup.memory.resources.
> 
> Second, it would be really useful to export the current configuration
> to userspace. E.g. a user should be able to query to which cgroup the given
> bpf map "belongs" and which bpf maps belong to the given cgroups. Otherwise
> it will create a problem for userspace programs which manage cgroups
> (e.g. systemd): they should be able to restore the current configuration
> from the kernel state, without "remembering" what has been configured
> before.

This too can be achieved by separating cgroup setup from tagging specific
resources. The agent and the cgroups know what each cgroup is supposed to do,
as they already do now, and each resource is tagged as persistent or not, so
everything is always known without the agent and the application having to
explicitly share that information.
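
On the agent side, again purely hypothetically (the knob name is made up),
the setup could be as simple as the service-level cgroup declaring that it
absorbs persistent charges before any instance is started:

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

/* "memory.charge_persistent" is a made-up knob, for illustration only */
static int declare_persistent_domain(const char *cgrp_path)
{
	char path[PATH_MAX];
	int fd, ret;

	snprintf(path, sizeof(path), "%s/memory.charge_persistent", cgrp_path);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;

	ret = write(fd, "1", 1) == 1 ? 0 : -1;
	close(fd);
	return ret;
}

Which cgroup a given bpf map or tmpfs file is charged to can then be answered
from the tags plus the hierarchy, which is the "everything is always known"
part above.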

Thanks.

-- 
tejun


