Re: [PATCH 0/7] Split list_lru lock into per-cgroup scope


 



On Tue, Jun 25, 2024 at 5:26 AM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Tue, 25 Jun 2024 01:53:06 +0800 Kairui Song <ryncsn@xxxxxxxxx> wrote:
>
> > Currently, every list_lru has a per-node lock that protects adding,
> > deletion, isolation, and reparenting of all list_lru_one instances
> > belonging to this list_lru on this node. This lock contention is heavy
> > when multiple cgroups modify the same list_lru.
> >
> > This can be alleviated by splitting the lock into per-cgroup scope.
>
> I'm wavering over this.  We're at -rc5 and things generally feel a bit
> unstable at present.
>
> The performance numbers are nice for extreme workloads, but can you
> suggest how much benefit users will see in more typical workloads?

Hi, the contention issue is minor when memory stress is low, but the
split is still beneficial there, and this series also optimizes cgroup
initialization.

The memhog test I provided was run on a 32-core system with 64
cgroups (I forgot to include this detail, sorry). That's not a very
extreme configuration, considering it's not rare to have thousands of
cgroups on a system nowadays. Having them all share a global lock is
definitely not a good idea.

The issue is barely observable for things like desktop usage though.

>
> Anyway, opinions are sought and I'd ask people to please review this
> work promptly if they feel is it sufficiently beneficial.

More reviews are definitely beneficial.




