Re: [PATCH v3] mm: memcg: Use larger batches for proactive reclaim

On Mon 05-02-24 12:26:10, T.J. Mercier wrote:
> On Mon, Feb 5, 2024 at 11:40 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> >
> > On Mon 05-02-24 11:29:49, T.J. Mercier wrote:
> > > On Mon, Feb 5, 2024 at 2:40 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > >
> > > > On Fri 02-02-24 23:38:54, T.J. Mercier wrote:
> > > > > Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> > > > > reclaim"), we passed the number of pages for the reclaim request directly
> > > > > to try_to_free_mem_cgroup_pages, which could lead to significant
> > > > > overreclaim. After 0388536ac291, the number of pages was limited to a
> > > > > maximum of 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim.
> > > > > However, such a small batch size caused a regression in reclaim
> > > > > performance due to many more reclaim start/stop cycles inside
> > > > > memory_reclaim.
> > > >
> > > > You have mentioned that in one of the previous emails but it is good to
> > > > mention what is the source of that overhead for the future reference.
> > >
> > > I can add a sentence about the restart cost being amortized over more
> > > pages with a large batch size. It covers things like repeatedly
> > > flushing stats, walking the tree, evaluating protection limits, etc.
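
As an aside, a back-of-the-envelope model of that amortization, written as a
small user-space C sketch. The per-page and per-cycle costs below are made-up
numbers purely for illustration, not measurements:

	/*
	 * Toy cost model: reclaiming N pages in batches of B pays the fixed
	 * per-cycle restart overhead (stats flush, tree walk, protection
	 * checks) roughly N/B times.  Constants are hypothetical.
	 */
	#include <stdio.h>

	int main(void)
	{
		const double per_page_us = 1.0;		/* hypothetical per-page cost */
		const double per_cycle_us = 500.0;	/* hypothetical fixed restart cost */
		const unsigned long nr_pages = 262144;	/* 1G of 4K pages */
		const unsigned long batches[] = { 32, 1024, 65536 };

		for (int i = 0; i < 3; i++) {
			unsigned long b = batches[i];
			unsigned long cycles = (nr_pages + b - 1) / b;
			double total_us = nr_pages * per_page_us + cycles * per_cycle_us;

			printf("batch %6lu: %6lu cycles, ~%.0f us total\n",
			       b, cycles, total_us);
		}
		return 0;
	}
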
> > >
> > > > > Reclaim tries to balance nr_to_reclaim fidelity with fairness across
> > > > > nodes and cgroups over which the pages are spread. As such, the bigger
> > > > > the request, the bigger the absolute overreclaim error. Historical
> > > > > in-kernel users of reclaim have used fixed, small-sized requests to
> > > > > approach an appropriate reclaim rate over time. When reclaiming a user
> > > > > request of arbitrary size, use decaying batch sizes to manage error
> > > > > while maintaining reasonable throughput.
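
To make the decay concrete, here is a toy user-space simulation of the batch
sizes produced by (nr_to_reclaim - nr_reclaimed) / 4, under the optimistic
assumption that every batch is reclaimed in full; the SWAP_CLUSTER_MAX floor
mirrors the "reclaim enforces a minimum" comment in the patch:

	#include <stdio.h>

	#define SWAP_CLUSTER_MAX 32UL

	int main(void)
	{
		unsigned long nr_to_reclaim = 262144;	/* 1G of 4K pages */
		unsigned long nr_reclaimed = 0;

		while (nr_reclaimed < nr_to_reclaim) {
			unsigned long batch = (nr_to_reclaim - nr_reclaimed) / 4;

			if (batch < SWAP_CLUSTER_MAX)
				batch = SWAP_CLUSTER_MAX;	/* reclaim's minimum */
			if (batch > nr_to_reclaim - nr_reclaimed)
				batch = nr_to_reclaim - nr_reclaimed;

			printf("batch = %lu pages\n", batch);
			nr_reclaimed += batch;	/* assume full success */
		}
		return 0;
	}

The batches shrink geometrically (65536, 49152, 36864, ...), so most of the
request is handled in a few large calls, while the absolute error of the final
calls is bounded by the small remaining target.
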
> > > >
> > > > These numbers are with MGLRU or the default reclaim implementation?
> > >
> > > These numbers are for both. root uses the memcg LRU (MGLRU was
> > > enabled), and /uid_0 does not.
> >
> > Thanks it would be nice to outline that in the changelog.
> 
> Ok, I'll update the table below for each case.
> 
> > > > > root - full reclaim       pages/sec   time (sec)
> > > > > pre-0388536ac291      :    68047        10.46
> > > > > post-0388536ac291     :    13742        inf
> > > > > (reclaim-reclaimed)/4 :    67352        10.51
> > > > >
> > > > > /uid_0 - 1G reclaim       pages/sec   time (sec)  overreclaim (MiB)
> > > > > pre-0388536ac291      :    258822       1.12            107.8
> > > > > post-0388536ac291     :    105174       2.49            3.5
> > > > > (reclaim-reclaimed)/4 :    233396       1.12            -7.4
> > > > >
> > > > > /uid_0 - full reclaim     pages/sec   time (sec)
> > > > > pre-0388536ac291      :    72334        7.09
> > > > > post-0388536ac291     :    38105        14.45
> > > > > (reclaim-reclaimed)/4 :    72914        6.96
> > > > >
> > > > > Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> > > > > Signed-off-by: T.J. Mercier <tjmercier@xxxxxxxxxx>
> > > > > Reviewed-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> > > > > Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> > > > >
> > > > > ---
> > > > > v3: Formatting fixes per Yosry Ahmed and Johannes Weiner. No functional
> > > > > changes.
> > > > > v2: Simplify the request size calculation per Johannes Weiner and Michal Koutný
> > > > >
> > > > >  mm/memcontrol.c | 6 ++++--
> > > > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > > index 46d8d02114cf..f6ab61128869 100644
> > > > > --- a/mm/memcontrol.c
> > > > > +++ b/mm/memcontrol.c
> > > > > @@ -6976,9 +6976,11 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> > > > >               if (!nr_retries)
> > > > >                       lru_add_drain_all();
> > > > >
> > > > > +             /* Will converge on zero, but reclaim enforces a minimum */
> > > > > +             unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
> > > >
> > > > This doesn't fit into the existing coding style. I do not think there is
> > > > a strong reason to go against it here.
> > >
> > > There's been some back and forth here. You'd prefer to move this to
> > > the top of the while loop, under the declaration of reclaimed? It's
> > > farther from its use there, but it does match the existing style in
> > > the file better.
> >
> > This is not something I deeply care about but generally it is better to
> > not mix styles unless that is a clear win. If you want to save one LOC
> > you can just move it up - just a couple of lines up, or you can keep the
> > definition closer and have a separate declaration.
> 
> I find it nicer to have to search as little as possible for both the
> declaration (type) and definition, but I am not attached to it either
> and it's not worth annoying anyone over here. Let's move it up like
> Yosry suggested initially.
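
For concreteness, the loop body would then read roughly like this (paraphrased
from memory, so treat the surrounding memory_reclaim() context lines as
approximate rather than an exact quote of the tree):

	while (nr_reclaimed < nr_to_reclaim) {
		/* Will converge on zero, but reclaim enforces a minimum */
		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
		unsigned long reclaimed;

		if (signal_pending(current))
			return -EINTR;

		/* Final attempt: drain per-cpu LRU caches before giving up. */
		if (!nr_retries)
			lru_add_drain_all();

		reclaimed = try_to_free_mem_cgroup_pages(memcg,
					batch_size, GFP_KERNEL, reclaim_options);

		if (!reclaimed && !nr_retries--)
			return -EAGAIN;

		nr_reclaimed += reclaimed;
	}
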
> 
> > > > > +
> > > > >               reclaimed = try_to_free_mem_cgroup_pages(memcg,
> > > > > -                                     min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> > > > > -                                     GFP_KERNEL, reclaim_options);
> > > > > +                                     batch_size, GFP_KERNEL, reclaim_options);
> > > >
> > > > Also with the increased reclaim target do we need something like this?
> > > >
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index 4f9c854ce6cc..94794cf5ee9f 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -1889,7 +1889,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> > > >
> > > >                 /* We are about to die and free our memory. Return now. */
> > > >                 if (fatal_signal_pending(current))
> > > > -                       return SWAP_CLUSTER_MAX;
> > > > +                       return sc->nr_to_reclaim;
> > > >         }
> > > >
> > > >         lru_add_drain();
> > > > >
> > > > >               if (!reclaimed && !nr_retries--)
> > > > >                       return -EAGAIN;
> > > > > --
> > >
> > > This is interesting, but I don't think it's closely related to this
> > > change. This section looks like it was added to delay OOM kills due to
> > > apparent lack of reclaim progress when pages are isolated and the
> > > direct reclaimer is scheduled out. A couple things:
> > >
> > > In the context of proactive reclaim, current is not really undergoing
> > > reclaim due to memory pressure. It's initiated from userspace. So
> > > whether it has a fatal signal pending or not doesn't seem like it
> > > should influence the return value of shrink_inactive_list for some
> > > probably unrelated process. It seems more straightforward to me to
> > > return 0, and add another fatal signal pending check to the caller
> > > (shrink_lruvec) to bail out early (dealing with OOM kill avoidance
> > > there if necessary) instead of waiting to accumulate fake
> > > SWAP_CLUSTER_MAX values from shrink_inactive_list.
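
Something like the following, purely as an illustration of that idea (untested,
and the exact placement of the check inside shrink_lruvec()'s scan loop is
approximate):

	/* in shrink_inactive_list(): stop reporting fake progress */
	/* We are about to die and free our memory. Return now. */
	if (fatal_signal_pending(current))
		return 0;

	/*
	 * in shrink_lruvec(), inside the loop that walks the evictable LRUs:
	 * bail out instead of accumulating fake SWAP_CLUSTER_MAX returns,
	 * handling OOM kill avoidance here if that turns out to be needed.
	 */
	if (fatal_signal_pending(current))
		break;
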
> >
> > The point of this code is to bail out early if the caller has fatal
> > signals pending. That could be SIGTERM sent to the process performing
> > the reclaim for whatever reason. The bail out is tuned for
> > SWAP_CLUSTER_MAX as you can see, and your patch increases the reclaim
> > target, which means the bailout wouldn't work properly: you wouldn't
> > get any useful work done, but you wouldn't really bail out either.
> 
> It's increasing the target to 1/4 of what it was 6 months ago, before
> 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> reclaim"), and the bailout hasn't changed since then, so if anything it
> should trigger more quickly than it was originally tuned for.

Yes, this wasn't handled properly back then either.

> > > As for changing the value, SWAP_CLUSTER_MAX puts the final value of
> > > sc->nr_reclaimed pretty close to sc->nr_to_reclaim. Since there's a
> > > loop for each evictable lru in shrink_lruvec, we could end up with 4 *
> > > sc->nr_to_reclaim in sc->nr_reclaimed if we switched to
> > > sc->nr_to_reclaim from SWAP_CLUSTER_MAX... an even bigger lie. So I
> > > don't think we'd want to do that.
> >
> > The actual number returned from the reclaim is not really important
> > because memory_reclaim would break out of the loop and userspace would
> > never see the result.
> 
> This makes sense, but it makes me uneasy. I can't point to anywhere
> this would cause a problem currently (except maybe super unlikely
> overflow of nr_reclaimed), but it feels like a setup for future
> unintended consequences.

Think of something like
	timeout $TIMEOUT echo $TARGET > $MEMCG_PATH/memory.reclaim
where timeout acts as a stopgap if the reclaim cannot finish within
$TIMEOUT.

-- 
Michal Hocko
SUSE Labs



