Re: [PATCH] memcg: use ratelimited stats flush in the reclaim

On Mon, Jun 24, 2024 at 10:15:38AM GMT, Yosry Ahmed wrote:
> On Mon, Jun 24, 2024 at 10:02 AM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> >
> > On Mon, Jun 24, 2024 at 05:57:51AM GMT, Yosry Ahmed wrote:
> > > > > and I will explain why below. I know it may be a necessary
> > > > > evil, but I would like us to make sure there is no other option before
> > > > > going forward with this.
> > > >
> > > > Instead of a necessary evil, I would call it a pragmatic approach, i.e.
> > > > resolve the ongoing pain with a good enough solution and work on the
> > > > long-term solution later.
> > >
> > > It seems like there are a few ideas for solutions that may address
> > > longer-term concerns; let's make sure we try those out first before we
> > > fall back to the short-term mitigation.
> > >
> >
> > Why? More specifically, why try out other things before this patch? Both
> > can be done in parallel. This patch has been running in production at
> > Meta for several weeks without issues. Also, I don't see how merging this
> > would impact our work on long-term solutions.
> 
> The problem is that once this is merged, it will be difficult to
> change this back to a normal flush once other improvements land. We
> don't have a test that reproduces the problem that we can use to make
> sure it's safe to revert this change later; we only have data from
> prod.
> 

I am pretty sure the work on the long-term solution will be iterative,
involving many reverts and redoing things differently. So, I think it is
understandable that we may need to revert, or revert the reverts.

> Once this mitigation goes in, I think everyone will be less motivated
> to get more data from prod about whether it's safe to revert the
> ratelimiting later :)

As I said, I don't expect "safe in prod" to be a strict requirement for a
change.

> 
> >
> > [...]
> > >
> > > Thanks for explaining this in such detail. It does make me feel
> > > better, but keep in mind that the above heuristics may change in the
> > > future and become more sensitive to stale stats, and very likely no
> > > one will remember that we previously decided stale stats were fine.
> > > previously.
> > >
> >
> > When was the last time this heuristic changed? This heuristic was
> > introduced in 2008 for anon pages and extended to file pages in 2016. In
> > 2019 the ratio enforcement at 'reclaim root' was introduced. I am pretty
> > sure we will improve the whole rstat flushing thing within a year or so
> > :P
> 
> Fair point, although I meant it's easy to miss that the flush is
> ratelimited and the stats are potentially stale in general :)
> 
> >
> > > >
> > > > For the cache trim mode, the inactive file LRU size is read and the
> > > > kernel scales it down based on the reclaim iteration (file >>
> > > > sc->priority), only checking whether the result is zero or not. Again,
> > > > precise information is not needed.
> > >
> > > It sounds like it is possible that we enter the cache trim mode when
> > > we shouldn't if the stats are stale. Couldn't this lead to
> > > over-reclaiming file memory?
> > >
> >
> > Can you explain how this over-reclaiming of file memory would happen?
> 
> In one reclaim iteration, we could flush the stats, read the inactive
> file LRU size, confirm that (file >> sc->priority) > 0 and enter the
> cache trim mode, reclaiming file memory only. Let's assume that we
> reclaimed enough file memory such that the condition (file >>
> sc->priority) > 0 does not hold anymore.
> 
> In a subsequent reclaim iteration, the flush could be skipped due to
> ratelimiting. Now we will enter the cache trim mode again and reclaim
> file memory only, even though the actual amount of file memory is low.
> This will cause over-reclaiming from file memory and skipping anon
> memory that we should have reclaimed, which means that we will need
> additional reclaim iterations to actually free memory.
> 
> I believe this scenario would be possible with ratelimiting, right?
> 

So, (old_file >> sc->priority) > 0 is true but (new_file >>
sc->priority) > 0 is false. In the next iteration, (old_file >>
(sc->priority-1)) > 0 will still be true, but somehow (new_file >>
(sc->priority-1)) > 0 is false. That can happen if, in the previous
iteration, the kernel somehow reclaimed more than double what it was
supposed to reclaim, or if there are concurrent reclaimers. In addition,
nr_reclaimed must still be less than nr_to_reclaim and there must be no
file deactivation request.
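
To put rough, made-up numbers on it (illustrative values only, not taken
from any workload): say sc->priority is 10 and the last flushed inactive
file count was 2048 pages. A minimal standalone sketch of the arithmetic:

	#include <stdio.h>

	int main(void)
	{
		unsigned long stale_file = 2048; /* count from the last flush */
		unsigned long fresh_file = 500;  /* actual size after reclaim */
		int priority = 10;

		/* iteration N: 2048 >> 10 == 2, so cache trim mode is entered */
		printf("%lu\n", stale_file >> priority);

		/* iteration N+1: priority drops to 9; a ratelimited flush can
		 * keep the stale count, so 2048 >> 9 == 4 and we trim again...
		 */
		printf("%lu\n", stale_file >> (priority - 1));

		/* ...even though a fresh count would say no: 500 >> 9 == 0 */
		printf("%lu\n", fresh_file >> (priority - 1));
		return 0;
	}

Note that getting from 2048 down to below 512 means more than 75% of that
LRU was reclaimed in a single iteration, i.e. well over what a single
iteration normally aims for.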

Yeah, it can happen, but a lot of weird conditions need to hold
concurrently for it to happen.
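
For anyone following along, the heuristic under discussion looks roughly
like this in mm/vmscan.c (a paraphrased sketch; exact details vary by
kernel version):

	/*
	 * If there are plenty of inactive file pages that are not
	 * thrashing, trim them first before touching anon pages.
	 */
	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
		sc->cache_trim_mode = 1;
	else
		sc->cache_trim_mode = 0;

With the ratelimited flush, the 'file' value read here may be somewhat
stale, which is the window the scenario above depends on.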
