On Tue, Mar 28, 2023 at 7:15 AM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
>
> On Tue, Mar 28, 2023 at 06:16:34AM +0000, Yosry Ahmed wrote:
> [...]
> > @@ -585,8 +585,8 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
> >   */
> >  static void flush_memcg_stats_dwork(struct work_struct *w);
> >  static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
> > -static DEFINE_SPINLOCK(stats_flush_lock);
> >  static DEFINE_PER_CPU(unsigned int, stats_updates);
> > +static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
> >  static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
> >  static u64 flush_next_time;
> >
> > @@ -636,15 +636,18 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
> >
> >  static void __mem_cgroup_flush_stats(void)
> >  {
> > -	unsigned long flag;
> > -
> > -	if (!spin_trylock_irqsave(&stats_flush_lock, flag))
> > +	/*
> > +	 * We always flush the entire tree, so concurrent flushers can just
> > +	 * skip. This avoids a thundering herd problem on the rstat global lock
> > +	 * from memcg flushers (e.g. reclaim, refault, etc).
> > +	 */
> > +	if (atomic_xchg(&stats_flush_ongoing, 1))
>
> Have you profiled this? I wonder if we should replace the above with
>
> 	if (atomic_read(&stats_flush_ongoing) || atomic_xchg(&stats_flush_ongoing, 1))

I profiled the entire series with perf and did not notice a notable
difference between before and after the patch series -- but maybe some
specific access patterns cause a regression, I am not sure.

Would an atomic_cmpxchg() serve the same purpose? It is easier to read /
more concise, I guess. Something like:

	if (atomic_cmpxchg(&stats_flush_ongoing, 0, 1))

WDYT?

> to not always dirty the cacheline. This would not be an issue if there
> is no cacheline sharing but I suspect percpu stats_updates is sharing
> the cacheline with it and may cause false sharing with the parallel stat
> updaters (updaters only need to read the base percpu pointer).
>
> Other than that the patch looks good.
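
To make sure I understand the false sharing concern, here is a minimal
userspace sketch of the three guards being discussed, with C11 stdatomic
standing in for the kernel atomic API (the helper names are purely
illustrative, not from the patch):

	#include <stdatomic.h>
	#include <stdbool.h>

	/* Illustrative stand-in for stats_flush_ongoing. */
	static atomic_int flush_ongoing;

	/* Guard as in the patch: unconditionally exchange, skip if already set. */
	static bool try_flush_xchg(void)
	{
		return atomic_exchange(&flush_ongoing, 1) == 0;
	}

	/*
	 * Guard as you suggested: a plain read first, so contending flushers
	 * that lose the race stay read-only on the cacheline while a flush
	 * is in progress.
	 */
	static bool try_flush_read_then_xchg(void)
	{
		if (atomic_load(&flush_ongoing))
			return false;
		return atomic_exchange(&flush_ongoing, 1) == 0;
	}

	/* The cmpxchg variant I had in mind: only the 0 -> 1 transition wins. */
	static bool try_flush_cmpxchg(void)
	{
		int expected = 0;

		return atomic_compare_exchange_strong(&flush_ongoing, &expected, 1);
	}

	/* Whoever wins the race flushes and then clears the flag. */
	static void flush_done(void)
	{
		atomic_store(&flush_ongoing, 0);
	}

The read-first variant is the one that keeps losing flushers read-only
on the flag, which I understand is the property you are after.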