On Mon, Oct 4, 2021 at 10:00 AM Tejun Heo <tj@xxxxxxxxxx> wrote:
>
> Hello, Shakeel.
>
> On Wed, Sep 29, 2021 at 04:59:36PM -0700, Shakeel Butt wrote:
> > Currently cgroup_rstat_updated() has a speculative already-on-list test
> > to check if the given cgroup is already part of the rstat update tree.
> > This helps in reducing the contention on the rstat cpu lock. This patch
> > adds a similar speculative not-on-list test on the rstat flush
> > codepath.
> >
> > Recently the commit aa48e47e3906 ("memcg: infrastructure to flush memcg
> > stats") added a periodic rstat flush. On a large system which is not
> > very busy, most of the per-cpu rstat trees would be empty. So, the
> > speculative not-on-list test helps in eliminating unnecessary work and
> > potentially reducing contention on the rstat cpu lock. Please note this
> > might introduce temporary inaccuracy, but with the frequent and periodic
> > flush this would not be an issue.
> >
> > To evaluate the impact of this patch, an 8 GiB tmpfs file was created on
> > a system with swap-on-zram and the file was pushed to swap through the
> > memory.force_empty interface. On reading the whole file, the memcg stat
> > flush in the refault code path is triggered. With this patch, we
> > observed a 38% reduction in the read time of the 8 GiB file.
>
> The patch looks fine to me but that's a lot of reduction in read time. Can
> you elaborate a bit on why this makes such a huge difference? Who's hitting
> on that lock so hard?
>

It was actually due to the machine size. I ran a single-threaded workload
without any interference on a 112-CPU machine, so most of the time the
flush was acquiring and releasing the per-cpu rstat lock for empty trees.
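
To illustrate, here is a simplified sketch of the flush-side fast path,
modeled on kernel/cgroup/rstat.c. The function signature and what the
inner loop flushes are abbreviated assumptions for illustration, not the
literal patch:

static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock,
						       cpu);
		struct cgroup *pos = NULL;
		unsigned long flags;

		/*
		 * Speculative not-on-list test: an empty updated tree
		 * means there is nothing to flush for this CPU, so skip
		 * the lock/unlock cycle entirely. The unlocked read is
		 * racy, but an update missed here is picked up by the
		 * next periodic flush, so the inaccuracy is temporary.
		 */
		if (!cgroup_rstat_cpu(cgrp, cpu)->updated_next)
			continue;

		raw_spin_lock_irqsave(cpu_lock, flags);
		while ((pos = cgroup_rstat_cpu_pop_updated(pos, cgrp, cpu)))
			cgroup_base_stat_flush(pos, cpu);
		raw_spin_unlock_irqrestore(cpu_lock, flags);
	}
}

This mirrors what cgroup_rstat_updated() already does on the update side,
where it bails out without the lock if updated_next is already set, so
both paths avoid the per-cpu lock in the common case.

Shakeel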