Hi Shakeel,

On Tue, Jan 05, 2021 at 04:47:33PM -0800, Shakeel Butt wrote:
> On Tue, Dec 29, 2020 at 6:35 AM Feng Tang <feng.tang@xxxxxxxxx> wrote:
> >
> > When profiling memory cgroup involved benchmarking, status update
> > sometimes take quite some CPU cycles. Current MEMCG_CHARGE_BATCH
> > is used for both charging and statistics/events updating, and is
> > set to 32, which may be good for accuracy of memcg charging, but
> > too small for stats update which causes concurrent access to global
> > stats data instead of per-cpu ones.
> >
> > So handle them differently, by adding a new bigger batch number
> > for stats updating, while keeping the value for charging (though
> > comments in memcontrol.h suggests to consider a bigger value too)
> >
> > The new batch is set to 512, which considers 2MB huge pages (512
> > pages), as the check logic mostly is:
> >
> >     if (x > BATCH), then skip updating global data
> >
> > so it will save 50% global data updating for 2MB pages
> >
> > Following are some performance data with the patch, against
> > v5.11-rc1, on several generations of Xeon platforms. Each category
> > below has several subcases run on different platform, and only the
> > worst and best scores are listed:
> >
> > fio:                            +2.0% ~  +6.8%
> > will-it-scale/malloc:           -0.9% ~  +6.2%
> > will-it-scale/page_fault1:      no change
> > will-it-scale/page_fault2:     +13.7% ~ +26.2%
> >
> > One thought is it could be dynamically calculated according to
> > memcg limit and number of CPUs, and another is to add a periodic
> > syncing of the data for accuracy reason similar to vmstat, as
> > suggested by Ying.
> >
>
> I am going to push back on this change. On a large system where jobs
> can run on any available cpu, this will totally mess up the stats
> (which is actually what happens on our production servers). These
> stats are used for multiple purposes like debugging or understanding
> the memory usage of the job or doing data analysis.

Thanks for sharing the use case, and I agree a bigger batch will make
debugging and data analysis harder.

Though we lack real-world workload data, the micro benchmarks do show
obvious benefits: the 0day robot reported a 43.4% improvement for the
vm-scalability lru-shm case, and it is up to +60% against 5.11-rc1.
The memory cgroup stats-updating hotspots have been on our radar for a
long time, and they show up clearly in perf profiles.

So I am wondering if we could make the batch size a configurable knob,
so that workloads which don't need accurate stats can still benefit.

One further thought: there are quite a few "BATCH" numbers in the
kernel for per-cpu/global data updating, so maybe we could add a
global flag 'sysctl_need_accurate_stats' and pick the batch size with
something like:

	if (sysctl_need_accurate_stats)
		batch = SMALLER_BATCH
	else
		batch = BIGGER_BATCH

Thanks,
Feng
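
PS: to make the batching idea concrete, below is a rough, compilable
userspace sketch of my own (not the actual memcontrol.c code; the
names mod_stat(), CHARGE_BATCH and STATS_BATCH as used here are just
illustrative). Each "cpu" accumulates stat deltas locally and only
folds them into the shared global counter once the local value crosses
the batch threshold, so a 2MB THP fault of 512 pages flushes on every
fault with a batch of 32, but only on every other fault with a batch
of 512.

/*
 * Rough userspace sketch of the per-cpu stats batching idea (not the
 * actual memcontrol.c code; mod_stat(), STATS_BATCH and the counters
 * are made up for illustration).  Deltas are accumulated per "cpu"
 * (per thread here) and only folded into the shared global counter
 * once they exceed the batch threshold.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define CHARGE_BATCH	32	/* current MEMCG_CHARGE_BATCH value */
#define STATS_BATCH	512	/* proposed stats batch: one 2MB THP = 512 pages */

static _Atomic long global_stat;	/* shared counter, costly to touch */
static _Thread_local long percpu_stat;	/* per-"cpu" cache */

/* roughly mirrors the "if (x > BATCH), then skip updating global data" check */
static void mod_stat(long delta, long batch)
{
	long x = percpu_stat + delta;

	if (labs(x) > batch) {
		atomic_fetch_add(&global_stat, x);	/* flush to global */
		x = 0;
	}
	percpu_stat = x;				/* remainder stays per-cpu */
}

int main(void)
{
	int i;

	/* eleven 2MB THP faults, each adding 512 base pages to the stat */
	for (i = 0; i < 11; i++)
		mod_stat(512, CHARGE_BATCH);	/* batch=32: flushes on every fault */
	printf("CHARGE_BATCH: global=%ld per-cpu=%ld\n",
	       atomic_load(&global_stat), percpu_stat);

	atomic_store(&global_stat, 0);
	percpu_stat = 0;

	for (i = 0; i < 11; i++)
		mod_stat(512, STATS_BATCH);	/* batch=512: flushes on every other fault */
	printf("STATS_BATCH:  global=%ld per-cpu=%ld\n",
	       atomic_load(&global_stat), percpu_stat);

	return 0;
}

The trade-off is accuracy: in the worst case the global counter can lag
by roughly nr_cpus * batch pages, which is exactly the concern you
raised above.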