Re: [RFC] Mechanism to induce memory reclaim

On Mon, Mar 7, 2022 at 12:50 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On Sun, Mar 06, 2022 at 03:11:23PM -0800, David Rientjes wrote:
> > Hi everybody,
> >
> > We'd like to discuss formalizing a mechanism to induce memory reclaim by
> > the kernel.
> >
> > The current multigenerational LRU proposal introduces a debugfs
> > mechanism[1] for this.  The "TMO: Transparent Memory Offloading in
> > Datacenters" paper also discusses a per-memcg mechanism[2].  While the
> > former can be used for debugging of MGLRU, both can quite powerfully be
> > used for proactive reclaim.
> >
> > Google's datacenters use a similar per-memcg mechanism for the same
> > purpose.  Thus, formalizing the mechanism would allow our userspace to use
> > an upstream supported interface that will be stable and consistent.
> >
> > This could be an incremental addition to MGLRU's lru_gen debugfs mechanism
> > but, since the concept has no direct dependency on the work, we believe it
> > is useful independent of the reclaim mechanism in use (both with and
> > without CONFIG_LRU_GEN).
> >
> > Idea: introduce a per-node sysfs mechanism for inducing memory reclaim
> > that can be useful for global (non-memcg constrained) reclaim and possibly
> > even if memcg is not enabled in the kernel or mounted.  This could
> > optionally take a memcg id to induce reclaim for a memcg hierarchy.
> >
> > IOW, this would be a /sys/devices/system/node/nodeN/reclaim mechanism for
> > each NUMA node N on the system.  (It would be similar to the existing
> > per-node sysfs "compact" mechanism used to trigger compaction from
> > userspace.)
>
> I generally think a proactive reclaim interface is a good idea.

It is great to hear this.

> A per-cgroup control knob would make more sense to me, as cgroupfs
> takes care of delegation, namespacing etc. and so would permit
> self-directed proactive reclaim inside containers.

A per-cgroup control knob works well for Google's data center use
case, too.  However, a sysfs interface such as /sys/kernel/mm/reclaim,
taking a node mask and a memcg id as arguments, is more general: it
can also be used by proactive reclaimers on systems that don't use
memcg (e.g. some desktop Linux distros).  To support those non-memcg
use cases, a special memcg id value indicating global reclaim can be
passed.
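
As a rough sketch of how a proactive reclaimer might drive such an
interface (the file path is the one proposed above, but the argument
format "<nr_pages> <nodemask> <memcg_id>" and the use of memcg id 0
to mean global reclaim are assumptions for illustration only):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int request_reclaim(unsigned long nr_pages,
			   const char *nodemask, int memcg_id)
{
	char buf[64];
	int fd, len;
	ssize_t ret;

	fd = open("/sys/kernel/mm/reclaim", O_WRONLY);
	if (fd < 0)
		return -1;

	/* memcg_id == 0: hypothetical special value for global reclaim */
	len = snprintf(buf, sizeof(buf), "%lu %s %d",
		       nr_pages, nodemask, memcg_id);
	ret = write(fd, buf, len);
	close(fd);
	return ret < 0 ? -1 : 0;
}

For example, request_reclaim(1024, "0-1", 0) would ask the kernel to
reclaim 1024 pages from nodes 0-1 without any memcg constraint.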

> > Userspace would write the following to this file:
> >  - nr_to_reclaim pages
>
> This makes sense, although (and you hinted at this below), I'm
> thinking it should be in bytes, especially if part of cgroupfs.
>
> >  - swappiness factor
>
> This I'm not sure about.
>
> Mostly because I'm not sure about swappiness in general. It balances
> between anon and file, but both of them are aged according to the same
> LRU rules. The only reason to prefer one over the other seems to be
> when the cost of reloading one (refault vs swapin) isn't the same as
> the other. That's usually a hardware property, which in a perfect
> world we'd auto-tune inside the kernel based on observed IO
> performance. Not sure why you'd want this per reclaim request.

The choice between anon and file pages is not only a hardware
property but also a matter of policy.  For the reasons you describe,
it is useful to give the userspace policy daemon the flexibility to
choose whether to reclaim anon pages, file pages, or both.  This is
important for Google's use cases, where anon pages are the primary
focus of proactive reclaim.

Maybe we can replace the swappiness factor with a page type mask
that explicitly selects which types of pages to reclaim.
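
To make that concrete, one possible encoding could look like the
following (the flag names are hypothetical, not existing kernel
symbols):

/* Hypothetical page type mask, for discussion only. */
#define RECLAIM_ANON	(1 << 0)	/* reclaim anonymous pages */
#define RECLAIM_FILE	(1 << 1)	/* reclaim file-backed pages */

Google's proactive reclaimers would then primarily pass RECLAIM_ANON,
while other deployments could pass RECLAIM_ANON | RECLAIM_FILE to get
behavior closer to today's balanced reclaim.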

> >  - flags to specify context, if any[**]
> >
> >  [**] this is offered for extensibility to specify the context in which
> >       reclaim is being done (clean file pages only, demotion for memory
> >       tiering vs eviction, etc), otherwise 0
>
> This one is curious. I don't understand the use cases for either of
> these examples, and I can't think of other flags a user may pass on a
> per-invocation basis. Would you care to elaborate some?

One example flag would control whether the requested proactive
reclaim is allowed to induce I/O.  This can be especially useful for
memory tiering to lower-cost memory devices, where I/O is likely
undesirable for proactively requested reclaim-based demotion.
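
As a hypothetical example (the flag name below does not exist in the
kernel):

/* Hypothetical context flag, for illustration only. */
#define RECLAIM_NO_IO	(1 << 0)	/* don't issue swap/filesystem I/O */

A tiering daemon requesting reclaim-based demotion would set
RECLAIM_NO_IO so that reclaim only migrates pages to the slower node
and never writes them out.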

Wei



