Re: [PATCH v2] mm/vmscan: get number of pages on the LRU list in memcgroup base on lru_zone_size

On 10/9/19 10:16 PM, Michal Hocko wrote:
> On Tue 08-10-19 17:34:03, Honglei Wang wrote:
>> How about we describe it like this:
>>
>> Getting the lru_size based on lru_zone_size of mem_cgroup_per_node, which
>> is not updated via batching, helps any related code path get a more
>> precise lru size in the mem_cgroup case. This way, the memory reclaim
>> code won't ignore small blocks of memory (say, less than
>> MEMCG_CHARGE_BATCH pages) on the lru list.

> I am sorry but this doesn't really explain the problem nor justify the
> patch.
>
> Let's first have a look at where we are. lruvec_lru_size provides an
> estimate of the number of pages on the given lru that qualify for the
> given zone index. Note the "estimate" part, because that is an
> optimization for the updater paths, which tend to be really hot. Here we
> are consistent between the global and memcg cases.
>
> Now we can have a look at the differences between the two cases. The
> global LRU case relies on periodic syncing from a kworker context. This
> has no guarantee on the timing, and as such we cannot really rely on it
> to be precise. The memcg path batches updates up to MEMCG_CHARGE_BATCH
> (32) pages and propagates the value up the hierarchy. There is no
> periodic sync-up, so the unsynced state might stay forever if no new
> accounting events happen.
>
> Now, does it really matter? 32 pages should be negligible for normal
> workloads (read: those where MEMCG_CHARGE_BATCH << limits). So we can
> discuss whether the other usecases are really sensible. Do we really
> want to support memcgs with a hard limit set to 10 pages? I would say I
> am not really convinced, because I have a hard time seeing a real
> application other than some artificial testing. On the other hand, there
> is non-trivial effort involved in making such usecases work - just
> consider all the potential caching/batching that we do for performance
> reasons.


Thanks for the detailed explanation, Michal. Yes, I didn't care about this kind of testing until our QA engineers came to me and reported that an LTP testcase doesn't work as expected while the same test passes on an older kernel. I recognize there are users whose job is doing functional verification on Linux. It might confuse them that the same test case fails on the latest kernel, and they don't know kernel internals such as the details of batch accounting. They just want to use several pages of memory to verify the memory-usage feature, and there is no 32-page limitation mentioned in any documentation...

I explained the batch accounting and MEMCG_CHARGE_BATCH behavior to the QA folks and clarified that it's not a kernel bug. But, on the other hand, the question is: is it necessary for us to cater to these users?
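For reference, the batching I described follows roughly this pattern (a simplified sketch modeled on __mod_memcg_state() in mm/memcontrol.c; the mem_cgroup_disabled() check and the hierarchical propagation loop are omitted here):

/*
 * Sketch of the memcg per-cpu stat batching: updates accumulate in a
 * per-cpu counter and are folded into the memcg-wide atomic only once
 * their absolute value exceeds MEMCG_CHARGE_BATCH (32) pages. Up to 31
 * pages per CPU can therefore stay invisible to readers of the
 * aggregated counter until further accounting events flush them.
 */
void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
{
	long x;

	x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]);
	if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
		atomic_long_add(x, &memcg->vmstats[idx]);
		x = 0;
	}
	__this_cpu_write(memcg->vmstats_percpu->stat[idx], x);
}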

> That being said, making lruvec_lru_size more precise doesn't sound like
> a bad idea in general. But it comes with an additional cost, which
> shouldn't really matter much with the current code, because it shouldn't
> be used from hot paths. But is this really the case? Have you done a
> full audit? Is this going to stay that way? These are important
> questions to answer in the changelog to justify the change properly.
>
> I hope this makes more sense now.


Yes, I'll think more about these questions.
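For anyone following the thread, the direction of the v2 patch is roughly the following (a simplified sketch of the proposed lruvec_lru_size(); in the memcg case it sums the unbatched per-zone lru_zone_size counters via mem_cgroup_get_zone_lru_size() instead of reading the batched vmstat counters):

unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru,
			      int zone_idx)
{
	unsigned long size = 0;
	int zid;

	for (zid = 0; zid <= zone_idx && zid < MAX_NR_ZONES; zid++) {
		struct zone *zone = &lruvec_pgdat(lruvec)->node_zones[zid];

		if (!managed_zone(zone))
			continue;

		if (!mem_cgroup_disabled())
			/* per-zone count, not subject to batching */
			size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
		else
			size += zone_page_state(zone, NR_ZONE_LRU_BASE + lru);
	}
	return size;
}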

Thanks,
Honglei



