On Tue, 22 Sep 2015 17:42:13 -0700 Greg Thelen <gthelen@xxxxxxxxxx> wrote:

> Andrew Morton wrote:
>
> > On Tue, 22 Sep 2015 15:16:32 -0700 Greg Thelen <gthelen@xxxxxxxxxx> wrote:
> >
> >> mem_cgroup_read_stat() returns a page count by summing per cpu page
> >> counters.  The summing is racy wrt. updates, so a transient negative
> >> sum is possible.  Callers don't want negative values:
> >> - mem_cgroup_wb_stats() doesn't want negative nr_dirty or nr_writeback.
> >> - oom reports and memory.stat shouldn't show confusing negative usage.
> >> - tree_usage() already avoids negatives.
> >>
> >> Avoid returning negative page counts from mem_cgroup_read_stat() and
> >> convert it to unsigned.
> >
> > Someone please remind me why this code doesn't use the existing
> > percpu_counter library which solved this problem years ago.
> >
> >> 	for_each_possible_cpu(cpu)
> >
> > and which doesn't iterate across offlined CPUs.
>
> I found [1] and [2] discussing the memory layout differences between:
> a) the existing memcg hand-rolled per-cpu arrays of counters
> vs
> b) an array of generic percpu_counters
> The current approach was claimed to have lower memory overhead and
> better cache behavior.
>
> I assume it's pretty straightforward to create generic
> percpu_counter_array routines which memcg could use.  Possibly
> something like this could be made general enough to satisfy vmstat as
> well, but that's less clear.
>
> [1] http://www.spinics.net/lists/cgroups/msg06216.html
> [2] https://lkml.org/lkml/2014/9/11/1057

That all sounds rather bogus to me.

__percpu_counter_add() doesn't modify struct percpu_counter at all
except for when the cpu-local counter overflows the configured batch
size.  And for the memcg application I suspect we can set the batch
size to INT_MAX...
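
For illustration only - an untested sketch with made-up names, not a
patch against the actual memcg code - a percpu_counter used with an
INT_MAX batch and a negative-clamping read could look something like:

#include <linux/percpu_counter.h>

/* hypothetical counter, standing in for one memcg stat item */
static struct percpu_counter example_stat;

static int example_init(void)
{
	return percpu_counter_init(&example_stat, 0, GFP_KERNEL);
}

static void example_account(long nr_pages)
{
	/*
	 * With a batch of INT_MAX the per-cpu delta is only folded into
	 * the shared count when it exceeds INT_MAX, so the shared
	 * cacheline is effectively never dirtied on the update path.
	 */
	__percpu_counter_add(&example_stat, nr_pages, INT_MAX);
}

static unsigned long example_read(void)
{
	s64 val = percpu_counter_sum(&example_stat);

	/*
	 * The sum is still racy against concurrent updates, so clamp
	 * transient negatives instead of reporting them.
	 */
	return val < 0 ? 0 : val;
}

And if I remember right, percpu_counter_sum() only walks online CPUs,
with the hotplug callback folding a dying CPU's delta into the shared
count, so the for_each_possible_cpu() issue goes away as well.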