Re: [PATCH V6 00/10] memcg: per cgroup background reclaim

On Fri, Apr 22, 2011 at 7:34 PM, Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
On Fri, Apr 22, 2011 at 07:10:25PM -0700, Ying Han wrote:
> On Fri, Apr 22, 2011 at 6:35 PM, Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> > On Wed, Apr 20, 2011 at 10:28:17PM -0700, Ying Han wrote:
> > > On Wed, Apr 20, 2011 at 10:08 PM, Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> > > > On Thu, Apr 21, 2011 at 01:00:16PM +0900, KAMEZAWA Hiroyuki wrote:
> > > > > I don't think it's a good idea to kick kswapd even when free
> > > > > memory is enough.
> > > >
> > > > This depends on what kswapd is supposed to be doing.  I don't say we
> > > > should reclaim from all memcgs (i.e. globally) just because one memcg
> > > > hits its watermark, of course.
> > > >
> > > > But the argument was that we need the watermarks configurable to force
> > > > per-memcg reclaim even when the hard limits are overcommitted, because
> > > > global reclaim does not do a fair job to balance memcgs.
> > >
> > > There seems to be some confusion here. The watermark we defined is
> > > per-memcg, and it is calculated based on the hard_limit. We need the
> > > per-memcg wmark for the same reason as the per-zone wmark, which
> > > triggers background reclaim before direct reclaim.
> >
> > Of course, I am not arguing against the watermarks.  I am just
> > (violently) against making them configurable from userspace.
> >
> > > There is a patch in my patchset which adds the tunable for both
> > > high/low_mark, which gives more flexibility to the admin to configure
> > > the host. In an over-commit environment, we might never hit the wmark
> > > if all the wmarks are set internally.
> >
> > And my point is that this should not be a problem at all!  If the
> > watermarks are not physically reachable, there is no reason to reclaim
> > on behalf of them.
> >
> > In such an environment, global memory pressure arises before the
> > memcgs get close to their hard limit, and global memory pressure
> > reduction should do the right thing and equally push back all memcgs.
> >
> > Flexibility in itself is not an argument.  On the contrary.  We commit
> > ourselves to that ABI and have to maintain this flexibility forever.
> > Instead, please find a convincing argument for the flexibility itself,
> > other than the need to workaround the current global kswapd reclaim.

[fixed following quotation]

> Ok, I tend to agree with you now that the over-commit example I gave
> earlier is a weak argument. We don't need to provide the ability to
> reclaim from a memcg before it reaches its wmarks in an over-commit
> environment.

Yep.  If it is impossible to reach the hard limit, it can't possibly
be a source of latency.

> However, I still think there is a need for the admin to have some
> control over which memcgs do background reclaim proactively (before
> global memory pressure), and that was the initial logic behind the API.

That sounds more interesting.  Do you have a specific use case that
requires this?

There might be more interesting use cases; here is one I can think of:

Let's say we have three jobs A, B and C, and one host with 32G of RAM. We configure each job's hard_limit as its peak memory usage.
A: 16G
B: 16G
C: 10G

1. We start running A with hard_limit 16G, and start running B with hard_limit 16G.
2. We set A's and B's soft_limit based on their "hot" memory; let's say we set A's soft_limit to 10G and B's soft_limit to 10G.
(The soft_limit changes based on their runtime memory usage.)
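
To make steps 1 and 2 concrete, here is a minimal sketch against the
cgroup v1 memory controller. The mount point /sys/fs/cgroup/memory and
the group names A and B are assumptions for illustration;
memory.limit_in_bytes and memory.soft_limit_in_bytes are the hard_limit
and soft_limit we are discussing.

/*
 * Sketch only: assumes the memory controller is mounted at
 * /sys/fs/cgroup/memory and that cgroups A and B already exist.
 */
#include <stdio.h>

static int write_val(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	/* Step 1: hard limits at each job's peak usage. */
	write_val("/sys/fs/cgroup/memory/A/memory.limit_in_bytes", "16G");
	write_val("/sys/fs/cgroup/memory/B/memory.limit_in_bytes", "16G");

	/* Step 2: soft limits at the "hot" working-set size. */
	write_val("/sys/fs/cgroup/memory/A/memory.soft_limit_in_bytes", "10G");
	write_val("/sys/fs/cgroup/memory/B/memory.soft_limit_in_bytes", "10G");

	return 0;
}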

If no more jobs are running on the system, A and B will easily fill up the whole system with pagecache pages. Since we are not over-committing the machine with their hard_limits, there will be no pressure to push their memory usage down to the soft_limit.

Now we would like to launch another job C, since we know there is A (16G - 10G) + B (16G - 10G) = 12G of "cold" memory that can be reclaimed (w/o impacting A's and B's performance). So here is what will happen:

1. We start running C on the host, which triggers global memory pressure right away. If the reclaim is fast, C starts growing with the free pages from A and B.

However, it is possible that the reclaim cannot catch up with the job's page allocations. We end up with either an OOM condition or a performance spike on any of the running jobs.

One way to improve this is to set a wmark on A and/or B to proactively reclaim pages before launching C. Global memory pressure won't help much here since we never trigger it.
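
To sketch what I mean (hypothetical names, not the actual code from the
patchset): mirror the per-zone watermark check at the memcg level by
treating (hard_limit - usage) as the memcg's "free" pages, so a
per-memcg background reclaimer is woken before the charge path falls
into direct reclaim.

/*
 * Hypothetical illustration only. Wake background reclaim when the
 * memcg's headroom drops below wmark_low; it then reclaims until the
 * headroom is back above wmark_high, like per-zone kswapd.
 */
#include <stdio.h>

struct memcg {
	unsigned long limit;		/* hard_limit, in 4K pages */
	unsigned long usage;		/* current charge, in 4K pages */
	unsigned long wmark_low;	/* wake background reclaim below this */
	unsigned long wmark_high;	/* background reclaim stops above this */
};

static int need_background_reclaim(const struct memcg *m)
{
	return m->limit - m->usage < m->wmark_low;
}

int main(void)
{
	/* Job A from the example: 16G hard_limit, only 64M of headroom. */
	struct memcg a = {
		.limit		= 16UL << 18,		/* 16G */
		.usage		= (16UL << 18) - (64UL << 8),
		.wmark_low	= 128UL << 8,		/* 128M */
		.wmark_high	= 256UL << 8,		/* 256M */
	};

	if (need_background_reclaim(&a))
		printf("wake per-memcg kswapd, reclaim to wmark_high\n");
	return 0;
}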

min_free_kbytes more or less indirectly provides the same on a global
level, but I don't think anybody tunes it just for aggressiveness of
background reclaim.

Hmm, we do scale that for Google workloads. With large machines under lots of memory pressure and heavy network traffic, we would like to reduce the likelihood of page allocation failures. But this is kind of different from what we are talking about here.
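
For reference, the relation between min_free_kbytes and the per-zone
marks is roughly the following (simplified from setup_per_zone_wmarks()
in mm/page_alloc.c of this era; the single-zone assumption is mine):

/*
 * Simplified sketch: each zone really gets a share of min_free_kbytes
 * proportional to its size, folded into one zone here. kswapd wakes
 * when free pages drop below "low" and sleeps again above "high", so
 * scaling min_free_kbytes up makes background reclaim start earlier
 * and run longer.
 */
#include <stdio.h>

int main(void)
{
	unsigned long min_free_kbytes = 65536;		/* example setting */
	unsigned long min = min_free_kbytes >> 2;	/* KB -> 4K pages */
	unsigned long low = min + (min >> 2);		/* min + min/4 */
	unsigned long high = min + (min >> 1);		/* min + min/2 */

	printf("min=%lu low=%lu high=%lu (pages)\n", min, low, high);
	return 0;
}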

--Ying 


> > (I fixed up the following quotation, please be more careful when
> > replying, this makes it so hard to follow your emails.  thanks!)

^^^^

> > > > My counter proposal is to fix global reclaim instead and apply
> > > > equal pressure on memcgs, such that we never have to tweak
> > > > per-memcg watermarks to achieve the same thing.
> > >
> > > We still need this, and that is the soft_limit reclaim under global
> > > background reclaim.
> >
> > I don't understand what you mean by that.  Could you elaborate?
>
> Sorry, I think I misunderstood your earlier comment. What I pointed out
> here was that we need both per-memcg background reclaim and global
> soft_limit reclaim. I don't think we have any disagreement on that at
> this point.

Ah, got you, thanks.

       Hannes

