Re: memory.force_empty is deprecated


 



Thank you.
Do you mean memory.force_empty won't be deprecated and removed?

Regards,
--Zhaohui



From:        Balbir Singh <bsingharora@xxxxxxxxx>
To:        Johannes Weiner <hannes@xxxxxxxxxxx>, Zhao Hui Ding/China/IBM@IBMCN
Cc:        Tejun Heo <tj@xxxxxxxxxx>, cgroups@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx
Date:        2016-11-17 06:39 PM
Subject:        Re: memory.force_empty is deprecated






On 05/11/16 02:21, Johannes Weiner wrote:
> Hi,
>
> On Fri, Nov 04, 2016 at 04:24:25PM +0800, Zhao Hui Ding wrote:
>> Hello,
>>
>> I'm Zhaohui from the IBM Spectrum LSF development team. I got the message
>> below when running LSF on SUSE 11.4, so I would like to share our usage
>> scenario and ask for suggestions on how to do without memory.force_empty.
>>
>> memory.force_empty is deprecated and will be removed. Let us know if it is
>> needed in your usecase at linux-mm@xxxxxxxxx
>>
>> LSF is a batch workload scheduler; it uses cgroups for resource
>> enforcement and accounting of batch jobs. For each job, LSF creates a
>> cgroup directory and puts the job's PIDs into that cgroup.
>>
>> When we implemented the LSF cgroup integration, we found that creating a
>> new cgroup is much slower than renaming an existing one: hundreds of
>> milliseconds versus less than 10 milliseconds.
>
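The per-job flow Zhaohui describes could be sketched roughly as below. This is a dry-run sketch, not LSF's actual code: the lsf/job42 path, the PID, and the v1 mount point are assumptions, and the "run" helper only prints each command so the sketch is safe to execute anywhere.

```shell
# Dry-run sketch of the per-job flow (cgroup v1, memory controller
# assumed mounted at /sys/fs/cgroup/memory). Path and PID are
# hypothetical; "run" prints commands instead of executing them.
CGROOT=/sys/fs/cgroup/memory
JOB_CG="$CGROOT/lsf/job42"
JOB_PID=12345                                # hypothetical job PID

run() { echo "+ $*"; }                       # print instead of executing

run mkdir -p "$JOB_CG"                       # create the per-job cgroup
run "echo $JOB_PID > $JOB_CG/cgroup.procs"   # attach the job's PID
```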

We added force_empty a long time back so that we could force-delete
cgroups. There was no definitive way of removing references to the cgroup
from page_cgroup otherwise.
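For reference, that force-delete sequence looks roughly like the dry-run sketch below (cgroup v1; the job path is hypothetical and "run" only prints the commands):

```shell
# Dry-run sketch of the force-delete sequence (cgroup v1). The path is
# hypothetical; "run" prints commands instead of executing them.
JOB_CG=/sys/fs/cgroup/memory/lsf/job42

run() { echo "+ $*"; }                 # print instead of executing

# Writing to memory.force_empty reclaims the group's remaining pages,
# dropping the page references that would otherwise have kept rmdir
# from succeeding on older v1 kernels.
run "echo 0 > $JOB_CG/memory.force_empty"
run rmdir "$JOB_CG"
```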

> Cgroup creation/deletion is not expected to be an ultra-hot path, but
> I'm surprised it takes longer than actually reclaiming leftover pages.
>
> By the time the jobs conclude, how much is usually left in the group?
>
> That said, is it even necessary to pro-actively remove the leftover
> cache from the group before starting the next job? Why not leave it
> for the next job to reclaim it lazily should memory pressure arise?
> It's easy to reclaim page cache, and the first to go as it's behind
> the next job's memory on the LRU list.

It might actually make sense to migrate all tasks out and check what the
leftovers look like -- they should be easy to reclaim. Also be mindful of
whether you are using v1 and have use_hierarchy set.
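That suggestion could be sketched roughly as below (dry-run, cgroup v1; the paths are hypothetical and "run" only prints the commands):

```shell
# Dry-run sketch: migrate remaining tasks back to the root cgroup, then
# inspect the leftover charges. Paths are hypothetical; "run" prints
# commands instead of executing them.
CGROOT=/sys/fs/cgroup/memory
JOB_CG="$CGROOT/lsf/job42"

run() { echo "+ $*"; }                 # print instead of executing

# Move each remaining task out; on v1 this migrates no charges by
# default (see memory.move_charge_at_immigrate), so leftover page
# cache stays behind in the group.
run "while read pid; do echo \$pid > $CGROOT/cgroup.procs; done < $JOB_CG/cgroup.procs"
# The leftovers are visible in memory.stat -- mostly cache, which is
# cheap to reclaim lazily under later memory pressure.
run "grep -E '^(cache|rss) ' $JOB_CG/memory.stat"
```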

Balbir Singh.




