On Mon 04-01-21 21:34:45, Feng Tang wrote:
> Hi Michal,
>
> On Mon, Jan 04, 2021 at 02:03:57PM +0100, Michal Hocko wrote:
> > On Tue 29-12-20 22:35:13, Feng Tang wrote:
> > > When checking a memory cgroup related performance regression [1],
> > > from the perf c2c profiling data, we found high false sharing when
> > > accessing 'usage' and 'parent'.
> > >
> > > On a 64 bit system, 'usage' and 'parent' are close to each other,
> > > and likely to end up in one cacheline (for cacheline size >= 64 B).
> > > 'usage' is usually written, while 'parent' is usually read, given
> > > the cgroup's hierarchical counting nature.
> > >
> > > So move 'parent' to the end of the structure to make sure they
> > > are in different cache lines.
> >
> > Yes, parent is a write-once field, so having it away from other heavy RW
> > fields makes sense to me.
> >
> > > Following is some performance data with the patch, against
> > > v5.11-rc1, on several generations of Xeon platforms. Most of the
> > > results are improvements, with only one malloc case on one platform
> > > showing a -4.0% regression. Each category below has several subcases
> > > run on different platforms, and only the worst and best scores are
> > > listed:
> > >
> > > fio:                     +1.8% ~ +8.3%
> > > will-it-scale/malloc1:   -4.0% ~ +8.9%
> > > will-it-scale/page_fault1: no change
> > > will-it-scale/page_fault2: +2.4% ~ +20.2%
> >
> > What is the second number? Std?
>
> For each case like 'page_fault2', I ran several subcases on different
> generations of Xeon, and only listed the lowest (first number) and
> highest (second number) scores.
>
> There were 5 runs and the results are: +3.6%, +2.4%, +10.4%, +20.2%,
> and +4.7%; +2.4% and +20.2% are the ones listed.

This should really be explained in the changelog, and ideally mention the
model as well. Seeing a std would be appreciated as well.
--
Michal Hocko
SUSE Labs