Re: Re: [PATCH] mm: memcg: Use larger chunks for proactive reclaim

On Fri, Feb 2, 2024 at 2:15 AM Michal Koutný <mkoutny@xxxxxxxx> wrote:
>
> On Fri, Feb 02, 2024 at 01:02:47PM +0800, Efly Young <yangyifei03@xxxxxxxxxxxx> wrote:
> > > Looking at the code, I'm not quite sure if this can be read this
> > > literally. Efly might be able to elaborate, but we do a full loop of
> > > all nodes and cgroups in the tree before checking nr_reclaimed, and
> > > rely on priority level for granularity. So request size and complexity
> > > of the cgroup tree play a role. I don't know where the exact factor
> > > two would come from.
> >
> > I'm sorry, that conclusion may have been arbitrary; it might only suit
> > my case. In my case, I traced the loop running twice each time before
> > checking nr_reclaimed, and each time it reclaimed less than my request
> > size (1G), so I concluded the upper bound was 2 * request. But now it
> > seems this depends on the cgroup tree I constructed, my system status,
> > and my request size (a relatively large chunk). There are many
> > influencing factors, so a specific upper bound is not accurate.
>
> Alright, thanks for the background.
>
> > > IMO it's more accurate to phrase it like this:
> > >
> > > Reclaim tries to balance nr_to_reclaim fidelity with fairness across
> > > nodes and cgroups over which the pages are spread. As such, the bigger
> > > the request, the bigger the absolute overreclaim error. Historic
> > > in-kernel users of reclaim have used fixed, small request batches to
> > > approach an appropriate reclaim rate over time. When we reclaim a user
> > > request of arbitrary size, use decaying batches to manage error while
> > > maintaining reasonable throughput.
>
> Hm, decay...
> So shouldn't the formula be
>   nr_pages = delta <= SWAP_CLUSTER_MAX ? delta : (delta + 3*SWAP_CLUSTER_MAX) / 4
> where
>   delta = nr_to_reclaim - nr_reclaimed
> ?
> (So that convergence for smaller deltas stays the same as in the original
> and other reclaim paths, while the conservative factor is applied so that
> larger user requests remain efficient.)

Tapering out at 32 instead of 4 doesn't make much difference in
practice because of how far off the actually reclaimed amount can be
from the request size. We're talking thousands of pages of error for a
request size of a few megs, and hundreds of pages of error for
requests less than 100 pages.

So all of these should be more or less equivalent:
delta <= SWAP_CLUSTER_MAX ? delta : (delta + 3*SWAP_CLUSTER_MAX) / 4
max((nr_to_reclaim - nr_reclaimed) / 4, (nr_to_reclaim - nr_reclaimed) % 4)
(nr_to_reclaim - nr_reclaimed) / 4 + 4
(nr_to_reclaim - nr_reclaimed) / 4

I was just trying to avoid putting in a 0 for the request size with the mod.
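
To make the comparison concrete, here's a quick user-space sketch (not the
kernel patch itself; the helper names and sample deltas are purely
illustrative, and SWAP_CLUSTER_MAX is hard-coded to mirror the kernel
constant) that evaluates the three variants over a few remaining deltas:

/*
 * Standalone comparison of the batch-size formulas discussed above.
 * delta = nr_to_reclaim - nr_reclaimed, in pages.
 */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL

/* Michal's variant: hand back the full remaining delta once it is small. */
static unsigned long batch_taper32(unsigned long delta)
{
	return delta <= SWAP_CLUSTER_MAX ?
		delta : (delta + 3 * SWAP_CLUSTER_MAX) / 4;
}

/* max(delta / 4, delta % 4): avoids asking for 0 pages when delta < 4. */
static unsigned long batch_max_mod(unsigned long delta)
{
	unsigned long q = delta / 4, r = delta % 4;

	return q > r ? q : r;
}

/* Plain quarter of the remaining delta. */
static unsigned long batch_quarter(unsigned long delta)
{
	return delta / 4;
}

int main(void)
{
	/* Remaining deltas in pages: a few hundred MB down to a handful. */
	unsigned long deltas[] = { 262144, 25600, 1024, 100, 32, 3 };
	unsigned int i;

	printf("%10s %10s %10s %10s\n", "delta", "taper32", "max/mod", "quarter");
	for (i = 0; i < sizeof(deltas) / sizeof(deltas[0]); i++)
		printf("%10lu %10lu %10lu %10lu\n", deltas[i],
		       batch_taper32(deltas[i]),
		       batch_max_mod(deltas[i]),
		       batch_quarter(deltas[i]));
	return 0;
}

For large deltas the variants agree to within SWAP_CLUSTER_MAX pages; they
only diverge once the remaining delta drops into the tens of pages, which is
the regime where the reclaim error already dominates.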
