On Wed 20-12-17 13:52:14, kemi wrote:
>
>
> On 2017-12-19 20:40, Michal Hocko wrote:
> > On Tue 19-12-17 14:39:24, Kemi Wang wrote:
> >> We have seen significant overhead in cache bouncing caused by NUMA counter
> >> updates in multi-threaded page allocation. See commit 1d90ca897cb0 ("mm:
> >> update NUMA counter threshold size") for more details.
> >>
> >> This patch updates NUMA counters to a fixed size of (MAX_S16 - 2) and deals
> >> with global counter updates using a different threshold size for node page
> >> stats.
> >
> > Again, no numbers.
>
> Compared to the vanilla kernel, I don't think it shows a performance improvement,
> so I didn't post performance data here.
> But if you would like to see the performance gain from enlarging the threshold
> size for NUMA stats (compared to the first patch), I will do that later.

Please do. I would also like to hear _why_ all counters cannot simply
behave the same. In other words, why can we not simply increase
stat_threshold? Maybe calculate_normal_threshold needs better scaling for
larger machines.
-- 
Michal Hocko
SUSE Labs
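
[Editor's note] For readers following the thread, below is a minimal,
self-contained sketch of the threshold scheme being debated: each CPU
accumulates updates in a private delta and only folds it into the shared
counter once the delta crosses a threshold, so a larger threshold means
fewer writes to the contended cache line at the cost of a less accurate
global value. This is not the kernel's vmstat code; the names (count_event,
pcp_delta, THRESHOLD) and the use of C11 atomics and thread-local storage
are stand-ins for the per-CPU machinery behind stat_threshold and
calculate_normal_threshold.

#include <stdatomic.h>
#include <stdio.h>

#define THRESHOLD 125                    /* stand-in for stat_threshold */

static _Atomic long global_count;        /* shared counter, cache-line bouncing */
static _Thread_local long pcp_delta;     /* private per-thread ("per-CPU") delta */

static void count_event(long n)
{
	pcp_delta += n;
	if (pcp_delta > THRESHOLD || pcp_delta < -THRESHOLD) {
		/* Fold the private delta into the shared counter. */
		atomic_fetch_add_explicit(&global_count, pcp_delta,
					  memory_order_relaxed);
		pcp_delta = 0;
	}
}

int main(void)
{
	for (int i = 0; i < 1000; i++)
		count_event(1);
	/*
	 * The remaining pcp_delta has not been folded yet, so readers of
	 * global_count see an approximate value; that accuracy-vs-bouncing
	 * trade-off is exactly what a larger threshold buys.
	 */
	printf("global=%ld, unfolded delta=%ld\n",
	       atomic_load(&global_count), pcp_delta);
	return 0;
}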