On Thu, Apr 21, 2011 at 2:05 AM, Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
Sure. Sorry for the confusion.

On Thu, Apr 21, 2011 at 5:46 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Thu, 21 Apr 2011 17:10:23 +0900
> Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
>
>> Hi Kame,
>>
>> On Thu, Apr 21, 2011 at 12:43 PM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>> > Ying, please take this just as a hint; you don't need to implement it as is.
>> > ==
>> > Now, memcg-kswapd is created per cgroup. Considering there are users
>> > who create hundreds of cgroups on a system, it consumes too many
>> > resources (memory, CPU time).
>> >
>> > This patch creates a thread pool for memcg-kswapd. All memcgs which
>> > need background reclaim are linked to a list, and memcg-kswapd
>> > picks up a memcg from the list and runs reclaim. This reclaims
>> > SWAP_CLUSTER_MAX pages and puts the memcg back at the tail of the
>> > list. memcg-kswapd will visit memcgs in round-robin manner and
>> > reduce usage.
>> >
>>
>> I haven't looked at the code yet, but just looking over the
>> description, I have a concern.
>> We have discussed LRU separation between global and memcg.
>
> Please discuss global LRU in other thread. memcg-kswapd is not related
> to global LRU _at all_.
>
> And this patch set is independent from the things we discussed at LSF.
>
>
>> The clear goal is how to keep _fairness_.
>>
>> For example,
>>
>> memcg-1 : # pages of LRU : 64
>> memcg-2 : # pages of LRU : 128
>> memcg-3 : # pages of LRU : 256
>>
>> If we have to reclaim 96 pages, memcg-1 would lose half of its pages.
>> That is a much greater fraction than the others, so memcg-1's LRU
>> rotation cycle would be very fast, and working-set pages in memcg-1
>> wouldn't have a chance to be promoted.
>> Is it fair?
>>
>> I think we should consider each memcg's LRU size when doing round-robin.
>>
>
> This set doesn't implement a feature to handle your example case, at all.
I don't mean the global LRU but fairness, even though this series is
based on per-memcg targeting.
I should have looked at patch [2/3] before posting the comment.
>
> This patch set handles
>
> memcg-1: # pages of over watermark : 64
> memcg-2: # pages of over watermark : 128
> memcg-3: # pages of over watermark : 256
>
> And finally it reclaims all pages over the watermarks which the user
> requested. Considering fairness, what we consider is in what order we
> reclaim memory from memcg-1, memcg-2, and memcg-3, and how to avoid
> unnecessary CPU hogging while reclaiming all (64+128+256) pages.
>
> With patch 1, the thread pool reclaims 32 pages per iteration and
> visits all memcgs in round-robin.
> With patch 2, it reclaims 32*weight pages per iteration on each memcg.
>
It seems you have considered my concern.
Okay, I will look at the idea.
For any ideas on global kswapd and soft_limit reclaim based on
round-robin (discussed at LSF), please move the discussion to:
[RFC no patch yet] memcg: revisit soft_limit reclaim on contention:
http://permalink.gmane.org/gmane.linux.kernel.mm/60966
I have already started on the patch and hope to post some results soon.
--Ying
>
> Thanks,
> -Kame
>
>
--
Kind regards,
Minchan Kim