On Mon, Apr 18, 2011 at 11:42 AM, Michal Hocko <mhocko@xxxxxxx> wrote:
On Mon 18-04-11 10:01:20, Ying Han wrote:
[...]
> On Mon, Apr 18, 2011 at 2:13 AM, Michal Hocko <mhocko@xxxxxxx> wrote:
> > I see. I am just concerned whether 3rd level of reclaim is a good idea.
> > We would need to do background reclaim anyway (and to preserve the
> > original semantic it has to be somehow watermark controlled). I am just
> > wondering why we have to implement it separately from kswapd. Cannot we
> > just simply trigger global kswapd which would reclaim all cgroups that
> > are under watermarks? [I am sorry for my ignorance if that is what is
> > implemented in the series - I haven't got to the patches yet]
> >
>
> They are different: per-zone reclaim vs. per-memcg reclaim. The first
> one is triggered if the zone is under memory pressure and we need
> to free pages to serve further page allocations. The second one is
> triggered if the memcg is under memory pressure and we need to free
> pages to leave room (limit - usage) for the memcg to grow.
OK, I see. I am still wondering: isn't this just a nice-to-have feature
rather than a must-have in order to get rid of the global LRU?
>
> Both of them are needed and that is how it is implemented on the direct
> reclaim path. The kswapd background reclaim only tries to smooth out the
> system and memcg performance by reclaiming pages proactively.
> It doesn't affect the functionality.
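To make the two trigger conditions described above concrete, here is a rough
sketch; the helper names and numbers are made up for illustration and are not
the actual mm/vmscan.c code:

/*
 * Sketch only: hypothetical helpers illustrating the two trigger
 * conditions, not the real kernel functions.
 */
#include <stdbool.h>
#include <stdio.h>

/* Per-zone background reclaim cares about free pages in the zone. */
static bool zone_needs_bg_reclaim(unsigned long free_pages,
                                  unsigned long zone_low_wmark_pages)
{
        return free_pages < zone_low_wmark_pages;
}

/* Per-memcg background reclaim cares about headroom (limit - usage). */
static bool memcg_needs_bg_reclaim(unsigned long long usage,
                                   unsigned long long limit,
                                   unsigned long long low_wmark_distance)
{
        return limit - usage < low_wmark_distance;
}

int main(void)
{
        /* zone with 1000 free pages and a 2000-page low watermark */
        printf("zone reclaim?  %d\n", zone_needs_bg_reclaim(1000, 2000));
        /* memcg at 490m usage, 500m limit, 40m low_wmark_distance */
        printf("memcg reclaim? %d\n",
               memcg_needs_bg_reclaim(490ULL << 20, 500ULL << 20, 40ULL << 20));
        return 0;
}

Both checks print 1 here: the zone is below its low watermark, and the memcg
has less than low_wmark_distance of headroom left under its limit.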
The per-memcg kswapd is a must-have, and it is less related to the effort of
"get rid of global LRU" than the next patch I am looking at, "enhance the
soft_limit reclaim". So this is the structure we will end up with:

background reclaim:
1. per-memcg: this patch
2. global: targeted reclaim, replacing the per-zone reclaim with soft_limit reclaim

direct reclaim:
1. per-memcg: no change from today
2. global: targeted reclaim, replacing the per-zone reclaim with soft_limit reclaim
Doesn't it make the transition more complicated? I have noticed many if-elses
in the kswapd path to distinguish per-cgroup reclaim from the traditional
global background reclaim.
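For readers following along, the kind of branching being referred to looks
roughly like this; the names and structure below are a made-up illustration of
the shape of the code, not the patch itself:

/* Made-up illustration of a background-reclaim loop that handles both
 * cases; this is not the actual patch code. */
#include <stdbool.h>
#include <stdio.h>

struct kswapd_ctx {
        bool is_global;         /* traditional per-node kswapd? */
        const char *target;     /* node or memcg being balanced */
};

static void background_reclaim(const struct kswapd_ctx *ctx)
{
        if (ctx->is_global)
                printf("balance zones of %s against the zone watermarks\n",
                       ctx->target);
        else
                printf("reclaim %s down to its high_wmark\n", ctx->target);
}

int main(void)
{
        const struct kswapd_ctx node0   = { true,  "node 0" };
        const struct kswapd_ctx memcg_a = { false, "memcg A" };

        background_reclaim(&node0);
        background_reclaim(&memcg_a);
        return 0;
}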
[...]
> > > > > Step1: Create a cgroup with 500M memory_limit.
> > > > > $ mkdir /dev/cgroup/memory/A
> > > > > $ echo 500m >/dev/cgroup/memory/A/memory.limit_in_bytes
> > > > > $ echo $$ >/dev/cgroup/memory/A/tasks
> > > > >
> > > > > Step2: Test and set the wmarks.
> > > > > $ cat /dev/cgroup/memory/A/memory.low_wmark_distance
> > > > > 0
> > > > > $ cat /dev/cgroup/memory/A/memory.high_wmark_distance
> > > > > 0
> > > >
> > > >
> > > They are used to tune the high/low wmarks based on the hard_limit. We
> > > might need to export that configuration to the user/admin, especially on
> > > machines where they over-commit by hard_limit.
> >
> > I remember there was some resistance against tuning watermarks
> > separately.
> >
>
> This API is based on KAMEZAWA's request. :)
This was just an FYI. Watermarks were considered an internal thing, so I
wouldn't be surprised if this got somehow controversial.
We went back and forth on how to set the high/low wmarks for different
configurations (over-commit or not). So far, giving the user the ability to
set the wmarks seems the most feasible way of fulfilling the requirement.
>
> >
> > > > > $ cat /dev/cgroup/memory/A/memory.reclaim_wmarks
> > > > > low_wmark 524288000
> > > > > high_wmark 524288000
> > > > >
> > > > > $ echo 50m >/dev/cgroup/memory/A/memory.high_wmark_distance
> > > > > $ echo 40m >/dev/cgroup/memory/A/memory.low_wmark_distance
> > > > >
> > > > > $ cat /dev/cgroup/memory/A/memory.reclaim_wmarks
> > > > > low_wmark 482344960
> > > > > high_wmark 471859200
> > > >
> > > > low_wmark is higher than high_wmark?
> > > >
> > >
> > > hah, it is confusing. I have them documented. Basically, low_wmark
> > > triggers reclaim and high_wmark stops the reclaim. And we have
> > >
> > > high_wmark < usage < low_wmark.
OK, I see how you calculate those watermarks now, but it is really confusing
for those who are used to the traditional watermark semantics.
That is true. I adopted the initial comment from Mel, where we keep the same
logic of triggering and stopping kswapd with the low/high wmarks and also
compare the usage_in_bytes to the wmarks. Either way is confusing, and I
guess we just need to document it well.
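For reference, the arithmetic behind the numbers in the session above, as a
small standalone check (not kernel code; only the 500m/40m/50m values from
the example are used):

/* Standalone check of the wmark numbers from the session above. */
#include <stdio.h>

int main(void)
{
        unsigned long long mb = 1024ULL * 1024;
        unsigned long long limit = 500 * mb;          /* memory.limit_in_bytes */
        unsigned long long low_distance = 40 * mb;    /* low_wmark_distance    */
        unsigned long long high_distance = 50 * mb;   /* high_wmark_distance   */

        /* usage rising above low_wmark wakes the per-memcg background
         * reclaim; dropping below high_wmark stops it */
        printf("low_wmark %llu\n", limit - low_distance);    /* 482344960 */
        printf("high_wmark %llu\n", limit - high_distance);  /* 471859200 */
        return 0;
}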
--Ying
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic