Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:

> On Thu, 7 Mar 2019 08:56:32 -0800 Greg Thelen <gthelen@xxxxxxxxxx> wrote:
>
>> Since commit a983b5ebee57 ("mm: memcontrol: fix excessive complexity in
>> memory.stat reporting") memcg dirty and writeback counters are managed
>> as:
>> 1) per-memcg per-cpu values in range of [-32..32]
>> 2) per-memcg atomic counter
>> When a per-cpu counter cannot fit in [-32..32] it's flushed to the
>> atomic.  Stat readers only check the atomic.
>> Thus readers such as balance_dirty_pages() may see a nontrivial error
>> margin: 32 pages per cpu.
>> Assuming 100 cpus:
>>    4k x86 page_size: 13 MiB error per memcg
>>   64k ppc page_size: 200 MiB error per memcg
>> Considering that dirty+writeback are used together for some decisions,
>> the errors double.
>>
>> This inaccuracy can lead to undeserved oom kills.  One nasty case is
>> when all per-cpu counters hold positive values offsetting an atomic
>> negative value (i.e. per_cpu[*]=32, atomic=n_cpu*-32).
>> balance_dirty_pages() only consults the atomic and does not consider
>> throttling the next n_cpu*32 dirty pages.  If the file_lru is in the
>> 13..200 MiB range then there's absolutely no dirty throttling, which
>> burdens vmscan with only dirty+writeback pages, thus resorting to oom
>> kill.
>>
>> It could be argued that tiny containers are not supported, but it's
>> more subtle.  It's the amount of space available for the file lru that
>> matters.  If a container has memory.max - 200 MiB of non-reclaimable
>> memory, then it will also suffer such oom kills on a 100 cpu machine.
>>
>> ...
>>
>> Make balance_dirty_pages() and wb_over_bg_thresh() work harder to
>> collect exact per memcg counters when a memcg is close to the
>> throttling/writeback threshold.  This avoids the aforementioned oom
>> kills.
>>
>> This does not affect the overhead of memory.stat, which still reads the
>> single atomic counter.
>>
>> Why not use percpu_counter?  memcg already handles cpus going offline,
>> so no need for that overhead from percpu_counter.  And the
>> percpu_counter spinlocks are more heavyweight than is required.
>>
>> It probably also makes sense to include exact dirty and writeback
>> counters in memcg oom reports.  But that is saved for later.
>
> Nice changelog, thanks.
>
>> Signed-off-by: Greg Thelen <gthelen@xxxxxxxxxx>
>
> Did you consider cc:stable for this?  We may as well - the stablebots
> backport everything which might look slightly like a fix anyway :(

Good idea.  Done in -v2 of the patch.

>> --- a/include/linux/memcontrol.h
>> +++ b/include/linux/memcontrol.h
>> @@ -573,6 +573,22 @@ static inline unsigned long memcg_page_state(struct mem_cgroup *memcg,
>>  	return x;
>>  }
>>
>> +/* idx can be of type enum memcg_stat_item or node_stat_item */
>> +static inline unsigned long
>> +memcg_exact_page_state(struct mem_cgroup *memcg, int idx)
>> +{
>> +	long x = atomic_long_read(&memcg->stat[idx]);
>> +#ifdef CONFIG_SMP
>> +	int cpu;
>> +
>> +	for_each_online_cpu(cpu)
>> +		x += per_cpu_ptr(memcg->stat_cpu, cpu)->count[idx];
>> +	if (x < 0)
>> +		x = 0;
>> +#endif
>> +	return x;
>> +}
>
> This looks awfully heavyweight for an inline function.  Why not make it
> a regular function and avoid the bloat and i-cache consumption?

Done in -v2.

> Also, did you instead consider making this spill the percpu counters
> into memcg->stat[idx]?  That might be more useful for potential future
> callers.  It would become a little more expensive though.

I looked at that approach, but couldn't convince myself it was safe.
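To make it concrete, the spilling variant would have to do something
along these lines (hand-written sketch of the idea only, not a tested or
proposed patch; the write to another cpu's counter is the part in
question):

static unsigned long memcg_exact_page_state(struct mem_cgroup *memcg, int idx)
{
	long x = atomic_long_read(&memcg->stat[idx]);
	int cpu;

	for_each_online_cpu(cpu) {
		long delta = per_cpu_ptr(memcg->stat_cpu, cpu)->count[idx];

		if (!delta)
			continue;
		/*
		 * Remote write to another cpu's counter: this can race
		 * with that cpu's local, non-atomic update in
		 * __mod_memcg_state() and lose a delta.
		 */
		per_cpu_ptr(memcg->stat_cpu, cpu)->count[idx] = 0;
		atomic_long_add(delta, &memcg->stat[idx]);
		x += delta;
	}
	if (x < 0)
		x = 0;
	return x;
}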
I kept staring at this warning in this_cpu_ops.txt: "Remote [...] Write
accesses can cause unique problems due to the relaxed synchronization
requirements for this_cpu operations."  So I'd like to defer this
possible optimization to a later patch.
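For completeness, the read-only helper itself is unchanged in -v2 apart
from being made a regular function in mm/memcontrol.c, and the memcg
writeback stats that feed balance_dirty_pages()/wb_over_bg_thresh() read
through it.  The gist of the call-site change is roughly the following
(paraphrased sketch, not the literal -v2 diff; the hook point is
mem_cgroup_wb_stats(), which supplies both callers):

	/*
	 * In mem_cgroup_wb_stats(): give the dirty throttling code exact
	 * counts instead of values that may be off by up to 32 pages per
	 * cpu per counter.
	 */
	*pdirty = memcg_exact_page_state(memcg, NR_FILE_DIRTY);
	*pwriteback = memcg_exact_page_state(memcg, NR_WRITEBACK);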