Re: [PATCH 0/7] memcg background reclaim, yet another one.

On Wed, 27 Apr 2011 20:55:49 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:

> On Tue, Apr 26, 2011 at 1:47 AM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > On Tue, 26 Apr 2011 01:43:17 -0700
> > Ying Han <yinghan@xxxxxxxxxx> wrote:
> >
> >> On Tue, Apr 26, 2011 at 12:43 AM, KAMEZAWA Hiroyuki <
> >> kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >>
> >> > On Tue, 26 Apr 2011 00:19:46 -0700
> >> > Ying Han <yinghan@xxxxxxxxxx> wrote:
> >> >
> >> > > On Mon, Apr 25, 2011 at 6:38 PM, KAMEZAWA Hiroyuki
> >> > > <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >> > > > On Mon, 25 Apr 2011 15:21:21 -0700
> >> > > > Ying Han <yinghan@xxxxxxxxxx> wrote:
> >
> >>
> >> > To clarify a bit, my question was meant to account it, but not necessarily
> >> > to limit it. We can use the existing cpu cgroup to do the cpu limiting, and
> >> > I am just wondering how to configure it for the memcg kswapd thread.
> >>
> >> Let's say in the per-memcg-kswapd model, I can echo the kswapd thread pid
> >> into the cpu cgroup (the same set of processes as the memcg, but in a cpu
> >> limiting cgroup instead). If the kswapd is shared, we might need extra work
> >> to account the cpu cycles correspondingly.
> >>
> >
> > Hm? Aren't statistics of elapsed_time enough?
> >
> > Now, I think a limiting scan/sec interface is more promising than time
> > or thread controls. It's easier to understand.
> 
> I think the cpu accounting will work by recording the elapsed_time per
> memcg work item.
> 
> But we might still need the cpu throttling as well. To give one use
> case from Google: we'd rather kill a low-priority job that is running
> tight on memory than have its reclaim thread affect the latency of a
> high-priority job. It is quite easy to see how to accomplish that in
> the per-memcg-per-kswapd model, but harder in the shared workqueue
> model. It is straightforward to read the cpu usage via cpuacct.usage*
> and to limit the cpu usage by setting cpu.shares. One concern we have
> here is that the scan/sec implementation will make things quite complex.
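[For reference, the cpuacct/cpu.shares approach described above could be driven roughly as follows. This is only a sketch: it assumes cgroup v1 mount points, a per-memcg-kswapd model with one dedicated reclaim thread per memcg, and a hypothetical thread naming scheme; none of these names are a stable kernel ABI.]

```shell
# Sketch, assuming cgroup v1 and the per-memcg-kswapd model discussed
# above; the thread name, cgroup paths, and share value are hypothetical.
JOB=lowprio
CPUCG=/sys/fs/cgroup/cpu/$JOB            # cpu cgroup mirroring the memcg

# Accounting: read cumulative CPU time (nanoseconds) charged to the group.
cat /sys/fs/cgroup/cpuacct/$JOB/cpuacct.usage

# Throttling: move the memcg's kswapd thread into the job's cpu cgroup and
# lower its relative share so reclaim cannot crowd out high-priority work.
KSWAPD_PID=$(pgrep -f "memcg_kswapd_$JOB")   # hypothetical thread name
echo "$KSWAPD_PID" > "$CPUCG/tasks"
echo 256 > "$CPUCG/cpu.shares"           # default share is 1024
```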
> 

I think you should check how the distance between limit<->hiwater works
before jumping onto the cpu scheduler. If you see that a memcg's bgreclaim is
hogging the cpu, you can stop it easily by setting limit==hiwat. Per-memcg
statistics seem enough to me. And I don't like splitting features
across cgroup subsystems any further. "To reduce cpu usage by a memcg, please
check the cpu cgroup and...." how complex it is! Do you remember what Hugh
Dickins pointed out at LSF? It's a big concern.
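[As a sketch of the limit==hiwat suggestion above: collapsing the distance between the hard limit and the high watermark means the watermark is never crossed, so the memcg's background reclaim never runs. The control-file name and cgroup v1 path below are assumptions based on this patch series, not a merged interface.]

```shell
# Sketch only: stop one memcg's background reclaim by setting hiwater
# equal to the limit, i.e. a watermark distance of zero.
MEMCG=/sys/fs/cgroup/memory/lowprio      # hypothetical memcg path

# Distance 0 means hiwater == limit, so there is no reclaim window and
# the per-memcg background reclaim stays idle; direct reclaim still
# applies at the hard limit as usual.
echo 0 > "$MEMCG/memory.high_wmark_distance"  # file name assumed from this series
```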

Setting up of combination of cgroup subsys is too complex.

Thanks,
-Kame



--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .

