Re: [PATCH] mm, memcg: do full scan initially in force_empty

On Mon, Aug 3, 2020 at 9:56 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Mon 03-08-20 21:20:44, Yafang Shao wrote:
> > On Mon, Aug 3, 2020 at 6:12 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > >
> > > On Fri 31-07-20 09:50:04, Yafang Shao wrote:
> > > > On Thu, Jul 30, 2020 at 7:26 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > > >
> > > > > On Tue 28-07-20 03:40:32, Yafang Shao wrote:
> > > > > > Sometimes we use memory.force_empty to drop pages in a memcg to work
> > > > > > around memory pressure issues. When we use force_empty, we want the
> > > > > > pages to be reclaimed ASAP. However, force_empty reclaims pages as a
> > > > > > regular reclaimer does: it scans the page cache LRUs starting from
> > > > > > DEF_PRIORITY and only drops to priority 0 for a full scan at the end.
> > > > > > That is a waste of time; we'd better do the full scan initially in
> > > > > > force_empty.
> > > > >
> > > > > Do you have any numbers please?
> > > > >
> > > >
> > > > Unfortunately the numbers don't improve noticeably; the elapsed time
> > > > is directly proportional to the total number of pages to be scanned.
> > >
> > > Your changelog claims an optimization and that should be backed by some
> > > numbers. It is true that reclaim at a higher priority behaves slightly
> > > and subtly differently, but that calls for even more detail in the
> > > changelog.
> > >
> >
> > With the additional change below (nr_to_scan changed as well), the
> > elapsed time of force_empty can be reduced by 10%.
> >
> > @@ -3208,6 +3211,7 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg)
> >  static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
> >  {
> >         int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
> > +       unsigned long size;
> >
> >         /* we call try-to-free pages for make this cgroup empty */
> >         lru_add_drain_all();
> > @@ -3215,14 +3219,15 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
> >         drain_all_stock(memcg);
> >         /* try to free all pages in this cgroup */
> > -       while (nr_retries && page_counter_read(&memcg->memory)) {
> > +       while (nr_retries && (size = page_counter_read(&memcg->memory))) {
> >                 int progress;
> >
> >                 if (signal_pending(current))
> >                         return -EINTR;
> > -               progress = try_to_free_mem_cgroup_pages(memcg, 1,
> > -                                                       GFP_KERNEL, true);
> > +               progress = try_to_free_mem_cgroup_pages(memcg, size,
> > +                                                       GFP_KERNEL, true,
> > +                                                       0);
>
> Have you tried this change without changing the reclaim priority?
>

I tried it again. It seems the improvement is mostly due to the change
of nr_to_reclaim rather than the reclaim priority:

-               progress = try_to_free_mem_cgroup_pages(memcg, 1,
+               progress = try_to_free_mem_cgroup_pages(memcg, size,
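
For reference, this is roughly what the isolated change looks like
against the mainline 4-argument try_to_free_mem_cgroup_pages(), i.e.
without the priority parameter added by my patch (a sketch only; the
congestion_wait fallback on no progress is trimmed):

static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
{
        int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
        unsigned long size;

        /* drain the per-cpu LRU caches and charge stock first */
        lru_add_drain_all();
        drain_all_stock(memcg);

        /* try to free all pages in this cgroup */
        while (nr_retries && (size = page_counter_read(&memcg->memory))) {
                if (signal_pending(current))
                        return -EINTR;

                /*
                 * Ask for the whole remaining charge at once. A request
                 * of 1 is clamped to SWAP_CLUSTER_MAX internally, so
                 * each call would otherwise stop after ~32 pages.
                 */
                if (!try_to_free_mem_cgroup_pages(memcg, size,
                                                  GFP_KERNEL, true))
                        nr_retries--;
        }

        return 0;
}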


> > Below are the numbers for a 16G memcg filled with clean page cache.
> > Without these changes:
> > $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> > real    0m2.247s
> > user    0m0.000s
> > sys     0m1.722s
> >
> > With these changes:
> > $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> > real    0m2.053s
> > user    0m0.000s
> > sys     0m1.529s
> >
> > But I'm not sure whether we should make this improvement, because
> > force_empty is not a critical path.
>
> Well, an isolated change to force_empty would be more acceptable but it
> is worth noting that a very large reclaim target might affect the
> userspace triggering this path because it will potentially increase
> latency to process any signals. I do not expect this to be a huge
> problem in practice because even reclaim for a smaller target can take
> quite long if the memory is not really reclaimable and it has to take
> the full world scan. Moreover, most userspace will simply do
> echo 1 > $MEMCG_PAGE/force_empty
> and only care about killing that if it takes too long.
>

We may run it from a script to force-empty many memcgs at the same
time. Of course we could measure the time each force_empty takes, but
that would be complicated.
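
Roughly something like this (a sketch only; the cgroup paths here are
made up):

#!/bin/sh
# Force-empty every child memcg under "foo" in parallel.
for d in /sys/fs/cgroup/memory/foo/*/; do
        echo 1 > "${d}memory.force_empty" &
done
wait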

> > > > But then I noticed that force_empty will try to write out dirty pages,
> > > > which is not what we expect, because this behavior may be dangerous in
> > > > a production environment.
> > >
> > > I do not understand your claim here. Direct reclaim doesn't write dirty
> > > page cache pages directly.
> >
> > It will write dirty pages once sc->priority drops to a low enough value:
> > if (sc->priority < DEF_PRIORITY - 2)
> >     sc->may_writepage = 1;
>
> OK, I see what you mean now. Please have a look above that check:
>                         /*
>                          * Only kswapd can writeback filesystem pages
>                          * to avoid risk of stack overflow. But avoid
>                          * injecting inefficient single-page IO into
>                          * flusher writeback as much as possible: only
>                          * write pages when we've encountered many
>                          * dirty pages, and when we've already scanned
>                          * the rest of the LRU for clean pages and see
>                          * the same dirty pages again (PageReclaim).
>                          */
>
> > >  And it is even less clear why that would be
> > > dangerous if it did.
> > >
> >
> > It will generate a lot of IO, which may block other tasks.
> >
> > > > What do you think about introducing a per-memcg drop_caches?
> > >
> > > I do not like the global drop_caches, and a per-memcg variant is not
> > > very different. This all shouldn't really be necessary because we do
> > > have means to reclaim memory in a memcg.
> > > --
> >
> > We once hit an issue where there were many negative dentries in some memcgs.
>
> Yes, negative dentries can build up but the memory reclaim should be
> pretty effective at reclaiming them.
>
> > These negative dentries were introduced by a specific workload in
> > these memcgs, and we want to drop them as soon as possible.
> > But unfortunately there is no good way to drop them except
> > force_empty or the global drop_caches.
>
> You can use memcg limits (e.g. memory.high) to pro-actively reclaim
> excess memory. Have you tried that?
>
> > The force_empty will also drop the pagecache pages, which is not
> > expected by us.
>
> force_empty is intended to reclaim _all_ pages.
>
> > The global drop_caches doesn't work either because it will drop slabs in
> > other memcgs.
> > That is why I want to introduce per memcg drop_caches.
>
> Problems with negative dentries have already been discussed in the past.
> I believe there has been no conclusion so far. Please try to dig into
> the archives.

I have read Waiman's proposal, but it seems there is no conclusion yet.
If the kernel can't fix this issue perfectly, then giving the user a
way to work around it would be a possible solution - drop_caches is
that kind of workaround.
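
For reference, the existing global knob is

    echo 2 > /proc/sys/vm/drop_caches

which frees reclaimable slab objects (dentries and inodes included),
but it does so system-wide, across every memcg. A per-memcg variant
would simply scope that to one cgroup.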

[ adding Waiman to CC ]


-- 
Thanks
Yafang



