On Fri, Apr 12, 2019 at 1:10 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On Fri, Apr 12, 2019 at 12:55:10PM -0700, Shakeel Butt wrote:
> > We faced this exact same issue and had a similar solution.
> >
> > > Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> >
> > Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
>
> Thanks for the review!
>
> > (Unrelated to this patchset) I think there should also be a way to
> > get the exact memcg stats. As the machines are getting bigger (more
> > cpus and larger basic page size), the accuracy of the stats is
> > getting worse. Internally we have an additional interface
> > memory.stat_exact for that. However I am not sure whether an
> > additional interface is the better fit for the upstream kernel, or
> > something like /proc/sys/vm/stat_refresh which syncs all the
> > per-cpu stats.
>
> I was talking to Roman about this earlier as well and he mentioned it
> would be nice to have periodic flushing of the per-cpu caches. The
> global vmstat has something similar. We might be able to hook into
> those workers, but it would likely require some smarts so we don't
> walk the entire cgroup tree every couple of seconds.
>
> We haven't had any actual problems with the per-cpu fuzziness, mainly
> because the cgroups of interest also grow in size as the machines get
> bigger, and so the relative error doesn't increase.

Yes, this is very machine size dependent: we see the issue more often
on larger machines.

> Are your requirements that the error dissipates over time (waiting for
> a threshold convergence somewhere?) or do you have automation that
> gets decisions wrong due to the error at any given point in time?

Not sure about the first one, but we definitely have the second case.
The node controller makes decisions online based on these stats. We
also periodically collect and store the stats of all jobs across the
fleet; that data is processed offline and used in many ways, and the
inaccuracy in the stats affects all of that analysis, particularly for
small jobs.
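
For reference, the read side of our internal memory.stat_exact boils
down to folding the unflushed per-cpu deltas into the batched counter
at read time. A rough sketch (untested here; the field names vmstats
and vmstats_percpu->stat are assumptions based on my reading of this
series, not verbatim from our patch):

#include <linux/atomic.h>
#include <linux/memcontrol.h>
#include <linux/percpu.h>

/*
 * Illustrative only: the memcg field names are assumptions, see
 * above. Returns the stat with the per-cpu residue folded back in,
 * so the result is off by at most the updates racing with the loop
 * rather than by up to nr_cpus * batch size.
 */
static long memcg_page_state_exact(struct mem_cgroup *memcg, int idx)
{
	/* Batched hierarchical counter, updated every batch-size events. */
	long x = atomic_long_read(&memcg->vmstats[idx]);
	int cpu;

	/* Add the deltas still sitting in the per-cpu caches. */
	for_each_possible_cpu(cpu)
		x += per_cpu_ptr(memcg->vmstats_percpu, cpu)->stat[idx];

	return x < 0 ? 0 : x;
}

It is of course still racy against concurrent updates, but the error
becomes transient instead of a persistent per-cpu residue, which is
what matters for the offline analysis.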