Re: memory cgroup pagecache and inode problem

On Thu, Jan 10, 2019 at 12:30 AM Fam Zheng <zhengfeiran@xxxxxxxxxxxxx> wrote:
>
>
>
> > On Jan 10, 2019, at 13:36, Yang Shi <shy828301@xxxxxxxxx> wrote:
> >
> > On Sun, Jan 6, 2019 at 9:10 PM Fam Zheng <zhengfeiran@xxxxxxxxxxxxx> wrote:
> >>
> >>
> >>
> >>> On Jan 5, 2019, at 03:36, Yang Shi <shy828301@xxxxxxxxx> wrote:
> >>>
> >>>
> >>> drop_caches would drop all page caches globally. You may not want to
> >>> drop the page caches used by other memcgs.
> >>
> >> We’ve tried your async force_empty patch (with a modification to default it to true to make it transparently enabled for the sake of testing), and for the past few days the stale mem cgroups still accumulate, up to 40k.
> >>
> >> We’ve double checked that the force_empty routines are invoked when a mem cgroup is offlined. But this doesn’t look very effective so far. Because, once we do `echo 1 > /proc/sys/vm/drop_caches`, all the groups immediately go away.
> >>
> >> This is a bit unexpected.
> >>
> >> Yang, could you hint at what is missing in the force_empty operation, compared to a blanket drop_caches?
> >
> > Drop caches does invalidate pages inode by inode. But, memcg
> > force_empty does call memcg direct reclaim.
>
> But force_empty touches things that drop_caches doesn’t? If so, then maybe combining both approaches would be more reliable. Since, like you said,

AFAICS, force_empty may unmap pages, but drop_caches doesn't.
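
For reference, these are the two knobs being compared (paths assume cgroup
v1 with the memory controller mounted at /sys/fs/cgroup/memory; <group> is
just a placeholder):

    # global: invalidates clean page cache for every inode on the system
    echo 1 > /proc/sys/vm/drop_caches

    # per-memcg: runs direct reclaim against a single group only
    echo 0 > /sys/fs/cgroup/memory/<group>/memory.force_empty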

> dropping _all_ pages is usually too much and thus not desired, we may want to somehow limit the dropped caches to those that are in the memory cgroup in question. What do you think?

This is what force_empty is supposed to do.  But, as your test shows,
some page cache may still remain after force_empty, which then causes
offline memcgs to accumulate.  I haven't figured out what happened.  You
may try what Michal suggested.
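
For what it's worth, a rough way to watch the accumulation (assuming
cgroup v1 with the memory controller mounted at /sys/fs/cgroup/memory;
the gap between the two counts roughly corresponds to offlined but not
yet freed memcgs):

    # kernel-side count of memory cgroups, offlined ones included
    awk '$1 == "memory" { print $3 }' /proc/cgroups

    # directories still visible in the memory hierarchy
    find /sys/fs/cgroup/memory -type d | wc -l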

Yang

>
>
> >
> > Offlined memcgs will not go away if there are still pages charged. It may
> > relate to the per-cpu memcg stock. I recall there are some commits which
> > solve the per-cpu page counter cache problem.
> >
> > 591edfb10a94 mm: drain memcg stocks on css offlining
> > d12c60f64cf8 mm: memcontrol: drain memcg stock on force_empty
> > bb4a7ea2b144 mm: memcontrol: drain stocks on resize limit
> >
> > Not sure if they would help out.
>
> These are all in 4.20, which we tested, but it didn't help.
>
> Fam
>



