On Thu, May 5, 2022 at 9:42 PM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
>
> On Thu, May 5, 2022 at 5:13 AM Ganesan Rajagopal <rganesan@xxxxxxxxxx> wrote:
> >
> > v1 memcg exports memcg->watermark as "memory.mem_usage_in_bytes" in
>
> *max_usage_in_bytes

Oops, thanks for the correction.

> > sysfs. This is missing for v2 memcg though "memory.current" is exported.
> > There is no other easy way of getting this information in Linux.
> > getrusage() returns ru_maxrss but that's the max RSS of a single process
> > instead of the aggregated max RSS of all the processes. Hence, expose
> > memcg->watermark as "memory.watermark" for v2 memcg.
> >
> > Signed-off-by: Ganesan Rajagopal <rganesan@xxxxxxxxxx>
>
> Can you please explain the use-case for which you need this metric?
> Also note that this is not really an aggregated RSS of all the
> processes in the cgroup. So, do you want max RSS or max charge and for
> what use-case?

We run a lot of automated tests when building our software and used to
run into OOM scenarios when the tests ran unbounded. We use this metric
to heuristically limit how many tests run in parallel, based on per-test
historical data.

I understand this isn't really aggregated RSS; max charge works. We just
need some metric to account for peak memory usage. It doesn't need to be
very accurate because there's significant variance between test runs
anyway, so we conservatively use the historical maximum to limit
parallelism.

Since this metric is not exposed in v2 memcg, the only alternative is to
poll "memory.current", which would be quite inefficient and grossly
inaccurate.

Ganesan
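For illustration, the scheduling heuristic described above could be sketched roughly as follows. This is not the actual test harness; the function name, the greedy packing strategy, and the numbers are all illustrative assumptions. The idea is just that recorded peak ("watermark") readings from earlier runs bound how many tests are admitted concurrently:

```python
# Hedged sketch: cap test parallelism so that the sum of historical
# per-test peak memory readings stays within a memory budget.
# All names and values here are illustrative, not from the real harness.

def max_parallel(historical_peaks, budget_bytes):
    """Conservatively count how many tests fit in budget_bytes,
    admitting the largest recorded peaks first."""
    total = 0
    count = 0
    for peak in sorted(historical_peaks, reverse=True):
        if total + peak > budget_bytes:
            break
        total += peak
        count += 1
    return count

# Example: peaks recorded from previous runs, 8 GiB budget.
peaks = [3 << 30, 2 << 30, 2 << 30, 1 << 30]  # bytes
print(max_parallel(peaks, 8 << 30))  # -> 4
```

With the proposed "memory.watermark" file, each per-test peak is a single read of the test's cgroup directory after the run, rather than a sampling loop over "memory.current" that can miss short-lived spikes.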