Re: [PATCH 5/7] memcg bgreclaim core.

On Tue, 26 Apr 2011 16:15:04 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:

> On Mon, Apr 25, 2011 at 10:08 PM, KAMEZAWA Hiroyuki <
> kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:

> > > I see that MEMCG_BGSCAN_LIMIT is a macro newly defined since the previous
> > > post. So now the number of pages to scan is capped at 2k for each
> > > memcg; does that make a difference between big and small cgroups?
> > >
> >
> > Now, no difference. One reason is that low_watermark - high_watermark is
> > limited to 4MB at most. It should be a static 4MB in many cases, and 2048
> > pages corresponds to scanning 8MB, twice low_wmark - high_wmark. Another
> > reason is that I haven't had enough time to consider tuning this.
> > With MEMCG_BGSCAN_LIMIT, round-robin can be simply fair, and I think that's
> > a good starting point.
> >
> 
> I can see a problem here with being "fair" to each memcg. Containers have
> different sizes and run different workloads. Some of them are more
> latency-sensitive than others, so they are willing to pay more CPU cycles
> for background reclaim.
> 

Hmm, I think support for that can be added easily. But...

> So, here we fix the amount of work per memcg, and the performance of those
> jobs will be hurt. If I understand correctly, we only have one work item on
> the workqueue per memcg, which means we can only reclaim that many pages
> per iteration. And if the queue is long, those jobs (allocating memory
> heavily, and willing to pay CPU to do background reclaim) will hit direct
> reclaim more often than necessary.
> 

But, from measurements, we cannot reclaim enough memory in time if the
workqueue is busy. Can you imagine 'make -j 8' not hitting the limit, thanks
to bgreclaim?

'Working harder' just adds more CPU consumption and results in more latency.
From my point of view, if direct reclaim has problematic costs, bgreclaim is
not easy or fast either; then 'working harder' cannot help. And a spike in
memory consumption can be very rapid. If an application execs an application
which does malloc(2G) under a 1G-limit memcg, we cannot avoid direct reclaim.

I think the user can set the limit higher and make the distance between
limit <-> wmark large. Then he gains more time and can avoid hitting direct
reclaim. How about enlarging the limit <-> wmark range for
performance-intensive jobs? The amount of work per memcg is the
limit <-> wmark range, I guess.

Thanks,
-Kame
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .