Re: [PATCH V4 00/10] memcg: per cgroup background reclaim

On Mon 18-04-11 10:01:20, Ying Han wrote:
> On Mon, Apr 18, 2011 at 2:13 AM, Michal Hocko <mhocko@xxxxxxx> wrote:
[...]
> > I see. I am just concerned whether 3rd level of reclaim is a good idea.
> > We would need to do background reclaim anyway (and to preserve the
> > original semantic it has to be somehow watermark controlled). I am just
> > wondering why we have to implement it separately from kswapd. Cannot we
> > just simply trigger global kswapd which would reclaim all cgroups that
> > are under watermarks? [I am sorry for my ignorance if that is what is
> > implemented in the series - I haven't got to the patches yet]
> >
> 
> The difference is per-zone reclaim vs. per-memcg reclaim. The first is
> triggered when a zone is under memory pressure and we need to free pages
> to serve further page allocations.  The second is triggered when the
> memcg is under memory pressure and we need to free pages to leave room
> (limit - usage) for the memcg to grow.

OK, I see.
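
To check that I follow, here is a rough sketch of the two trigger
conditions as I understand them (my own illustration, not code from the
patches; the struct and field names below are made up):

	/* illustrative only: when each flavour of background reclaim kicks in */
	struct zone_info  { unsigned long free_pages, low_wmark; };
	struct memcg_info { unsigned long usage, limit, low_wmark; };

	static int zone_needs_reclaim(const struct zone_info *z)
	{
		/* global kswapd: the zone is running short of free pages */
		return z->free_pages < z->low_wmark;
	}

	static int memcg_needs_reclaim(const struct memcg_info *m)
	{
		/* per-memcg reclaim: usage has grown too close to the limit,
		 * i.e. above the memcg's low watermark (the limit minus a
		 * configured distance) */
		return m->usage > m->low_wmark;
	}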


> 
> Both of them are needed, and that is how it is implemented on the direct
> reclaim path. The kswapd batches only try to smooth out system and memcg
> performance by reclaiming pages proactively; they don't affect the
> functionality.

I am still wondering, isn't this just a nice-to-have feature rather than a
must-have in order to get rid of the global LRU? Doesn't it make the
transition more complicated? I have noticed many if-else branches in the
kswapd path to distinguish per-cgroup reclaim from the traditional global
background reclaim.

[...]

> > > > > Step1: Create a cgroup with 500M memory_limit.
> > > > > $ mkdir /dev/cgroup/memory/A
> > > > > $ echo 500m >/dev/cgroup/memory/A/memory.limit_in_bytes
> > > > > $ echo $$ >/dev/cgroup/memory/A/tasks
> > > > >
> > > > > Step2: Test and set the wmarks.
> > > > > $ cat /dev/cgroup/memory/A/memory.low_wmark_distance
> > > > > 0
> > > > > $ cat /dev/cgroup/memory/A/memory.high_wmark_distance
> > > > > 0
> > > >
> > > >
> > > They are used to tune the high/low_wmarks based on the hard_limit. We
> > > might need to export that configuration to the user/admin, especially
> > > on machines where they over-commit by hard_limit.
> >
> > I remember there was some resistance against tuning watermarks
> > separately.
> >
> 
> This API is based on KAMEZAWA's request. :)

This was just an FYI. Watermarks were considered an internal thing, so I
wouldn't be surprised if this turned out to be somewhat controversial.

> 
> >
> > > > > $ cat /dev/cgroup/memory/A/memory.reclaim_wmarks
> > > > > low_wmark 524288000
> > > > > high_wmark 524288000
> > > > >
> > > > > $ echo 50m >/dev/cgroup/memory/A/memory.high_wmark_distance
> > > > > $ echo 40m >/dev/cgroup/memory/A/memory.low_wmark_distance
> > > > >
> > > > > $ cat /dev/cgroup/memory/A/memory.reclaim_wmarks
> > > > > low_wmark  482344960
> > > > > high_wmark 471859200
> > > >
> > > > low_wmark is higher than high_wmark?
> > > >
> > >
> > > hah, it is confusing. I have them documented. Basically, low_wmark
> > > triggers reclaim and high_wmark stops the reclaim. And we have
> > >
> > > high_wmark < usage < low_wmark.

OK, I see how you calculate those watermarks now, but it is really
confusing for those who are used to the traditional watermark semantics.
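
Working backwards from the example above, the calculation appears to be
(my reading of the numbers, not quoted from the documentation):

	low_wmark  = limit - low_wmark_distance  = 500M - 40M = 482344960
	high_wmark = limit - high_wmark_distance = 500M - 50M = 471859200

which is why the "low" mark ends up above the "high" mark.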
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic
