Re: [PATCH] mm, memcg: reduce size of struct mem_cgroup by using bit field

On Sat, Dec 28, 2019 at 7:55 AM Roman Gushchin <guro@xxxxxx> wrote:
>
> On Fri, Dec 27, 2019 at 07:43:52AM -0500, Yafang Shao wrote:
> > Some members of struct mem_cgroup can only be 0 (false) or 1 (true),
> > so we can define them as bit fields to reduce the size of the struct.
> > With this patch, the size of struct mem_cgroup can be reduced by 64
> > bytes in theory, but as there are some MEMCG_PADDING()s, the real
> > number may be different, as it is related to the cacheline size.
> > Anyway, this patch reduces the size of struct mem_cgroup to some
> > extent.
> >
> > Signed-off-by: Yafang Shao <laoar.shao@xxxxxxxxx>
> > Cc: Roman Gushchin <guro@xxxxxx>
> > ---
> >  include/linux/memcontrol.h | 21 ++++++++++++---------
> >  1 file changed, 12 insertions(+), 9 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index a7a0a1a5..f68a9ef 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -229,20 +229,26 @@ struct mem_cgroup {
> >       /*
> >        * Should the accounting and control be hierarchical, per subtree?
> >        */
> > -     bool use_hierarchy;
> > +     unsigned int use_hierarchy : 1;
> > +
> > +     /* Legacy tcp memory accounting */
> > +     unsigned int tcpmem_active : 1;
> > +     unsigned int tcpmem_pressure : 1;
> >
> >       /*
> >        * Should the OOM killer kill all belonging tasks, had it kill one?
> >        */
> > -     bool oom_group;
> > +     unsigned int  oom_group : 1;
> >
> >       /* protected by memcg_oom_lock */
> > -     bool            oom_lock;
> > -     int             under_oom;
> > +     unsigned int oom_lock : 1;
>
> Hm, looking at the original code, it was clear that oom_lock
> and under_oom are protected by memcg_oom_lock, but oom_kill_disable is not.
>
> This information seems to be lost.
>

I should add this comment back. Thanks for pointing this out.
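
Something like this should keep the locking information in place (just a
sketch on top of this patch, not a tested v2):

        /* oom_lock and under_oom are protected by memcg_oom_lock */
        unsigned int oom_lock : 1;

        /* OOM-Killer disable; not protected by memcg_oom_lock */
        unsigned int oom_kill_disable : 1;

        /* protected by memcg_oom_lock */
        int under_oom;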

> Also, I'd look at the actual memory savings. Is it worth it,
> or is it all eaten by the padding?
>

As explained in the commit log, the real saving depends on the cacheline
size, and in the future we may introduce other new bool members.
I have verified it on my server with a 64B cacheline, and the saving is 0.

Actually, there's no strong reason to make this minor optimization.
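
For what it's worth, the padding effect is easy to demonstrate in a toy
userspace program. This is only an illustration (the struct and member
names are made up, and the numbers are from a 64-bit gcc build with a
64B cacheline), with __attribute__((aligned)) standing in for
MEMCG_PADDING():

#include <stdio.h>
#include <stdbool.h>

#define CACHELINE 64

/* Before: bool/int flags, followed by a member that the kernel would
 * push to the next cacheline boundary with MEMCG_PADDING(). */
struct before {
        bool use_hierarchy;
        bool oom_group;
        bool oom_lock;
        int oom_kill_disable;
        unsigned long hot __attribute__((aligned(CACHELINE)));
};

/* After: the same flags packed into single bits. */
struct after {
        unsigned int use_hierarchy : 1;
        unsigned int oom_group : 1;
        unsigned int oom_lock : 1;
        unsigned int oom_kill_disable : 1;
        unsigned long hot __attribute__((aligned(CACHELINE)));
};

int main(void)
{
        /* The flags shrink from 8 bytes to 4, but both structs print
         * 128 here: the padding before the aligned member swallows
         * the difference, hence the saving of 0. */
        printf("before: %zu\n", sizeof(struct before));
        printf("after:  %zu\n", sizeof(struct after));
        return 0;
}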

> Thanks!
>
> >
> > -     int     swappiness;
> >       /* OOM-Killer disable */
> > -     int             oom_kill_disable;
> > +     unsigned int oom_kill_disable : 1;
> > +
> > +     int under_oom;
> > +
> > +     int     swappiness;
> >
> >       /* memory.events and memory.events.local */
> >       struct cgroup_file events_file;
> > @@ -297,9 +303,6 @@ struct mem_cgroup {
> >
> >       unsigned long           socket_pressure;
> >
> > -     /* Legacy tcp memory accounting */
> > -     bool                    tcpmem_active;
> > -     int                     tcpmem_pressure;
> >
> >  #ifdef CONFIG_MEMCG_KMEM
> >          /* Index in the kmem_cache->memcg_params.memcg_caches array */
> > --
> > 1.8.3.1
> >



