Re: memory cgroup pagecache and inode problem

On Sun, Jan 6, 2019 at 9:10 PM Fam Zheng <zhengfeiran@xxxxxxxxxxxxx> wrote:
>
>
>
> > On Jan 5, 2019, at 03:36, Yang Shi <shy828301@xxxxxxxxx> wrote:
> >
> >
> > drop_caches would drop all page caches globally. You may not want to
> > drop the page caches used by other memcgs.
>
> We’ve tried your async force_empty patch (with a modification to default it to true to make it transparently enabled for the sake of testing), and for the past few days the stale mem cgroups still accumulate, up to 40k.
>
> We’ve double checked that the force_empty routines are invoked when a mem cgroup is offlined, but this doesn’t look very effective so far: once we do `echo 1 > /proc/sys/vm/drop_caches`, all the groups immediately go away.
>
> This is a bit unexpected.
>
> Yang, could you hint what are missing in the force_empty operation, compared to a blanket drop cache?

drop_caches invalidates pages inode by inode, but memcg force_empty
just calls memcg direct reclaim.
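
For reference, a condensed sketch of the two paths (abridged from
fs/drop_caches.c and mm/memcontrol.c of kernels around this era;
locking, retries and error handling are omitted, and the
drop_pagecache() wrapper name is only illustrative):

/*
 * drop_caches walks every superblock and invalidates clean page
 * cache inode by inode, no matter which memcg the pages are
 * charged to.
 */
static void drop_pagecache_sb(struct super_block *sb, void *unused)
{
	struct inode *inode;

	list_for_each_entry(inode, &sb->s_inodes, i_sb_list)
		invalidate_mapping_pages(inode->i_mapping, 0, -1);
}

/* "echo 1 > /proc/sys/vm/drop_caches" ends up here */
static void drop_pagecache(void)
{
	iterate_supers(drop_pagecache_sb, NULL);
}

/*
 * force_empty instead loops memcg direct reclaim until the group's
 * page counter reads zero or reclaim stops making progress; pages
 * that reclaim declines to take stay charged.
 */
static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
{
	while (page_counter_read(&memcg->memory)) {
		if (!try_to_free_mem_cgroup_pages(memcg, 1,
						  GFP_KERNEL, true))
			return -EINTR;	/* no forward progress */
	}
	return 0;
}

So reclaim can give up on pages that invalidation would simply
drop, which would match what you are seeing.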

Offlined memcgs will not go away while pages are still charged to
them. This may be related to the per-cpu memcg stock; I recall some
commits that address the per-cpu page counter cache problem:

591edfb10a94 mm: drain memcg stocks on css offlining
d12c60f64cf8 mm: memcontrol: drain memcg stock on force_empty
bb4a7ea2b144 mm: memcontrol: drain stocks on resize limit

Not sure if they would help out.
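
For context, the stock is a small per-cpu cache of pre-charged
pages; until it is drained, a dying memcg can keep a non-zero page
counter even when nothing is really using it anymore. A minimal
sketch of the mechanism (simplified from mm/memcontrol.c; the real
code also handles preemption, irq flags and scheduled draining
work):

/*
 * Each CPU caches a small batch of pre-charged pages for one memcg
 * so that charging does not hit the page counters on every page.
 */
struct memcg_stock_pcp {
	struct mem_cgroup *cached;	/* memcg owning the stock */
	unsigned int nr_pages;		/* pre-charged pages cached */
};
static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);

/*
 * Return the cached charges to the memcg's page counter. Until
 * this runs, the counter stays non-zero and the offlined memcg
 * cannot be freed.
 */
static void drain_stock(struct memcg_stock_pcp *stock)
{
	struct mem_cgroup *old = stock->cached;

	if (stock->nr_pages) {
		page_counter_uncharge(&old->memory, stock->nr_pages);
		stock->nr_pages = 0;
	}
	stock->cached = NULL;
}

The three commits above essentially add drain_all_stock() calls on
css offlining, in force_empty and on limit resize, so these cached
charges are returned before the group is expected to disappear.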

Yang

>
> Fam




