Re: [PATCH v3 3/5] mm/memcg: Protect per-CPU counter by disabling preemption on PREEMPT_RT where needed.

On 2022-02-21 14:18:25 [+0100], Michal Koutný wrote:
> On Mon, Feb 21, 2022 at 12:31:17PM +0100, Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx> wrote:
> > What about memcg_rstat_updated()? It does:
> > 
> > |         x = __this_cpu_add_return(stats_updates, abs(val));
> > |         if (x > MEMCG_CHARGE_BATCH) {
> > |                 atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
> > |                 __this_cpu_write(stats_updates, 0);
> > |         }
> > 
> > The writes to stats_updates can happen from IRQ context as well as with
> > only preemption disabled. So this is not good, right?
> 
> These counters serve as a hint for aggregating the per-cpu per-cgroup stats.
> If they were systematically mis-updated, it could manifest as a missing
> "refresh signal" from the given CPU. OTOH, this lagging is also meant to
> be limited in time thanks to the periodic flushing.
> 
> So this could affect the freshness of the stats, but not their accuracy.

Oki. Then let me update the code as suggested and ignore this case, since
there is nothing to worry about here.
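
Just for completeness: if we ever did want to make that hint update safe
against both interrupts and preemption, wrapping the read-modify-write
would be enough. The sketch below is purely illustrative and not part of
this series; the helper name is made up and the body just mirrors the
quoted snippet from mm/memcontrol.c:

/* Illustrative only: close the lost-update window around the per-CPU
 * hint. As discussed above this is not needed, because a lost update
 * merely delays the refresh signal (freshness, not accuracy).
 */
static inline void memcg_rstat_updated_hint(int val)
{
	unsigned long flags;
	unsigned int x;

	local_irq_save(flags);

	x = __this_cpu_add_return(stats_updates, abs(val));
	if (x > MEMCG_CHARGE_BATCH) {
		atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
		__this_cpu_write(stats_updates, 0);
	}

	local_irq_restore(flags);
}

local_irq_save() also keeps the task on the CPU, so the __this_cpu_*()
operations would then be safe on both PREEMPT_RT and !PREEMPT_RT. Given
Michal's point that only freshness is affected, leaving the code as is
remains the simpler option.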

> HTH,
> Michal

Sebastian




