Re: [PATCH V7 1/2] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes

On Mon, Jul 22, 2024 at 1:02 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
>
> On Fri, Jul 19, 2024 at 09:52:17PM GMT, Yosry Ahmed wrote:
> > On Fri, Jul 19, 2024 at 3:48 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> > >
> > > On Fri, Jul 19, 2024 at 09:54:41AM GMT, Jesper Dangaard Brouer wrote:
> > > >
> > > >
> > > > On 19/07/2024 02.40, Shakeel Butt wrote:
> > > > > Hi Jesper,
> > > > >
> > > > > On Wed, Jul 17, 2024 at 06:36:28PM GMT, Jesper Dangaard Brouer wrote:
> > > > > >
> > > > > [...]
> > > > > >
> > > > > >
> > > > > > Looking at the production numbers for the time the lock is held for level 0:
> > > > > >
> > > > > > @locked_time_level[0]:
> > > > > > [4M, 8M)     623 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@               |
> > > > > > [8M, 16M)    860 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> > > > > > [16M, 32M)   295 |@@@@@@@@@@@@@@@@@                                   |
> > > > > > [32M, 64M)   275 |@@@@@@@@@@@@@@@@                                    |
> > > > > >
> > > > >
> > > > > Is it possible to get the above histogram for other levels as well?
> > > >
> > > > Data from other levels available in [1]:
> > > >  [1]
> > > > https://lore.kernel.org/all/8c123882-a5c5-409a-938b-cb5aec9b9ab5@xxxxxxxxxx/
> > > >
> > > > IMHO the data shows we will get the most out of skipping level-0
> > > > root-cgroup flushes.
> > > >
> > >
> > > Thanks a lot for the data. Are all or most of these locked_time_level[0]
> > > from kswapds? This just motivates me to strongly push the ratelimited
> > > flush patch of mine (which would be orthogonal to your patch series).
> >
> > Jesper and I were discussing a better ratelimiting approach, whether
> > it's measuring the time since the last flush, or only skipping if we
> > have a lot of flushes in a specific time frame (using __ratelimit()).
> > I believe this would be better than the current memcg ratelimiting
> > approach, and we can remove the latter.
> >
> > WDYT?
>
> The last statement gives me the impression that you are trying to fix
> something that is not broken. The current ratelimiting users are fine; the
> issue is with the sync flushers. Or maybe you are suggesting that the new
> ratelimiting be used for the sync flushers as well as the current
> ratelimiting users, and that it would make a good tradeoff between
> accuracy and potential flush stalls?

The latter. Basically the idea is to have more informed and generic
ratelimiting logic in the core rstat flushing code (e.g. using
__ratelimit()), which would apply to ~all flushers*. Then, we ideally
wouldn't need mem_cgroup_flush_stats_ratelimited() at all.

*The obvious exception is the force flushing case we discussed for
cgroup_rstat_exit().
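
For illustration, here is a very rough sketch of the kind of generic
ratelimiting I have in mind. The wrapper name and the interval/burst
numbers are made up, and whether the ratelimit state should be global
or per-cgroup is an open question -- this is only meant to show the
shape of it, not the actual code:

#include <linux/cgroup.h>
#include <linux/ratelimit.h>

/*
 * Allow at most 10 full flushes per second (made-up numbers). Beyond
 * that, sync flushers can tolerate slightly stale stats and skip the
 * expensive flush entirely.
 */
static DEFINE_RATELIMIT_STATE(rstat_flush_rs, HZ, 10);

/*
 * Hypothetical wrapper; force-flush paths would keep calling
 * cgroup_rstat_flush() directly and bypass this check.
 */
static void cgroup_rstat_flush_relaxed(struct cgroup *cgrp)
{
	if (!__ratelimit(&rstat_flush_rs))
		return;

	cgroup_rstat_flush(cgrp);
}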

In fact, I think we need that force flush even with the ongoing
flusher optimization, because there is a slight chance that a flush is
missed. That wouldn't be problematic for other flushers, but it
certainly can be for cgroup_rstat_exit(), as the stats would be
completely dropped.

The scenario I have in mind is:
- CPU 1 starts a flush of cgroup A. Flushing completes, but waiters are
not woken up yet.
- CPU 2 updates the stats of cgroup A after it is flushed by CPU 1.
- CPU 3 calls cgroup_rstat_exit(), sees the ongoing flusher and waits.
- CPU 1 wakes up the waiters.
- CPU 3 proceeds to destroy cgroup A, and the updates made by CPU 2 are lost.
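
To make the force-flush exception concrete, the exit path would have
to keep flushing unconditionally instead of piggy-backing on an
ongoing flusher, roughly along these lines (an approximation of the
current cgroup_rstat_exit(), not the patched code; field and helper
names may differ):

void cgroup_rstat_exit(struct cgroup *cgrp)
{
	/*
	 * Must not skip or merely wait on an ongoing flusher here: an
	 * update racing with that flush (the CPU 2 step above) would
	 * never be folded in, and since the per-cpu data is about to
	 * be freed, the stats would be lost for good. Flush
	 * unconditionally instead.
	 */
	cgroup_rstat_flush(cgrp);

	free_percpu(cgrp->rstat_cpu);
	cgrp->rstat_cpu = NULL;
}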




