On Mon, Feb 21, 2022 at 09:38:22PM +0800, Aaron Lu wrote:
> On Fri, Feb 18, 2022 at 12:20:03PM +0800, Aaron Lu wrote:
> > On Thu, Feb 17, 2022 at 09:31:13AM +0000, Mel Gorman wrote:
> > > On Thu, Feb 17, 2022 at 09:53:08AM +0800, Aaron Lu wrote:
> > > > > 2-socket CascadeLake (40 cores, 80 CPUs HT enabled)
> > > > >                                          5.17.0-rc3             5.17.0-rc3
> > > > >                                             vanilla       mm-highpcpopt-v2
> > > > > Hmean     page_fault1-processes-2        2694662.26 (   0.00%)      2695780.35 (   0.04%)
> > > > > Hmean     page_fault1-processes-5        6425819.34 (   0.00%)      6435544.57 *   0.15%*
> > > > > Hmean     page_fault1-processes-8        9642169.10 (   0.00%)      9658962.39 (   0.17%)
> > > > > Hmean     page_fault1-processes-12      12167502.10 (   0.00%)     12190163.79 (   0.19%)
> > > > > Hmean     page_fault1-processes-21      15636859.03 (   0.00%)     15612447.26 (  -0.16%)
> > > > > Hmean     page_fault1-processes-30      25157348.61 (   0.00%)     25169456.65 (   0.05%)
> > > > > Hmean     page_fault1-processes-48      27694013.85 (   0.00%)     27671111.46 (  -0.08%)
> > > > > Hmean     page_fault1-processes-79      25928742.64 (   0.00%)     25934202.02 (   0.02%) <--
> > > > > Hmean     page_fault1-processes-110     25730869.75 (   0.00%)     25671880.65 *  -0.23%*
> > > > > Hmean     page_fault1-processes-141     25626992.42 (   0.00%)     25629551.61 (   0.01%)
> > > > > Hmean     page_fault1-processes-172     25611651.35 (   0.00%)     25614927.99 (   0.01%)
> > > > > Hmean     page_fault1-processes-203     25577298.75 (   0.00%)     25583445.59 (   0.02%)
> > > > > Hmean     page_fault1-processes-234     25580686.07 (   0.00%)     25608240.71 (   0.11%)
> > > > > Hmean     page_fault1-processes-265     25570215.47 (   0.00%)     25568647.58 (  -0.01%)
> > > > > Hmean     page_fault1-processes-296     25549488.62 (   0.00%)     25543935.00 (  -0.02%)
> > > > > Hmean     page_fault1-processes-320     25555149.05 (   0.00%)     25575696.74 (   0.08%)
> > > > >
> > > > > The differences are mostly within the noise and the difference close to
> > > > > $nr_cpus is negligible.
> > > >
> > > > I have queued will-it-scale/page_fault1/processes/$nr_cpu on two 4-socket
> > > > servers: CascadeLake and CooperLake, and will let you know the result
> > > > once it's out.
> > > >
> > >
> > > Thanks, 4 sockets and a later generation would be nice to cover.
> > >
> > > > I'm using 'https://github.com/hnaz/linux-mm master' and doing the
> > > > comparison with commit c000d687ce22 ("mm/page_alloc: simplify how many
> > > > pages are selected per pcp list during bulk free") and commit 8391e0a7e172
> > > > ("mm/page_alloc: free pages in a single pass during bulk free") there.
> > > >
> > >
> > > The baseline looks fine. It's different to what I used but the page_alloc
> > > changes shouldn't have much impact.
> > >
> > > When looking at will-it-scale, please pay attention to lower CPU counts
> > > as well and take into account changes in standard deviation. Looking at the
> >
> > I'll also test nr_task=4/16/64 on the 4-socket CooperLake (nr_cpu=144) then.
> >
>
> For the record, these tests don't show any visible performance changes
> on CooperLake.

One thing I just noticed is that zone lock contention increased to some
extent. I'm not sure whether this is worrisome, but I suppose I should at
least mention it here.

The nr_task=100% test on the 4-socket CooperLake showed that zone lock
contention increased from 13.56% to 20.16%, and for nr_task=16 it
increased from 4.75% to 6.18%. The likely reason is that more code now
runs inside the lock, so when there is contention, the longer hold time
makes things worse.

I'm aware that nr_task=100% is a rare case and that this patchset is
meant to improve things when there is very little contention, which
should be the common case.
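To illustrate what I mean by "more code inside the lock", here is a
simplified standalone sketch (plain pthreads, not the actual
mm/page_alloc code; every name in it is made up) contrasting the two
shapes. The two-pass shape only holds the "zone lock" for the buddy
merge, while the single-pass shape also walks the pcp list inside the
critical section, so the hold time grows and contention hurts more:

/*
 * Standalone sketch of the tradeoff, not the actual kernel code;
 * page_stub, detach_batch(), merge_into_buddy() etc. are invented
 * stand-ins for illustration only.
 */
#include <pthread.h>
#include <stddef.h>

struct page_stub {
	struct page_stub *next;
};

static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;
static struct page_stub *buddy_list;	/* stands in for the buddy free lists */
static struct page_stub *pcp_list;	/* stands in for one per-cpu page list */

/*
 * Move up to @count pages from the pcp list onto @batch. In the
 * kernel, pcp lists are per-cpu, so this needs no zone lock.
 */
static void detach_batch(struct page_stub **batch, int count)
{
	while (count-- > 0 && pcp_list) {
		struct page_stub *page = pcp_list;

		pcp_list = page->next;
		page->next = *batch;
		*batch = page;
	}
}

/* Splice @batch onto the "buddy" list; caller must hold zone_lock. */
static void merge_into_buddy(struct page_stub *batch)
{
	while (batch) {
		struct page_stub *page = batch;

		batch = page->next;
		page->next = buddy_list;
		buddy_list = page;
	}
}

/* Old shape: only the buddy merge runs under the lock. */
static void bulk_free_two_pass(int count)
{
	struct page_stub *batch = NULL;

	detach_batch(&batch, count);		/* outside the lock */
	pthread_mutex_lock(&zone_lock);
	merge_into_buddy(batch);
	pthread_mutex_unlock(&zone_lock);
}

/* New shape: the pcp walk itself now adds to the lock hold time. */
static void bulk_free_single_pass(int count)
{
	struct page_stub *batch = NULL;

	pthread_mutex_lock(&zone_lock);
	detach_batch(&batch, count);
	merge_into_buddy(batch);
	pthread_mutex_unlock(&zone_lock);
}

As I understand it, the single pass wins in the uncontended case
because the page list is only walked once, at the cost of a longer
critical section when the lock is contended.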
So I guess that's just the tradeoff we have to make... Here are the
results for the performance metric and the zone lock metrics:

nr_task=100%

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/mode/test/thp_enabled/cpufreq_governor:
  lkp-cpl-4sp1/will-it-scale/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/100%/process/page_fault1/never/performance

commit/ucode:
  8391e0a7e1728d74faecebf096b446ac5d0a5709/0x7002302 (mm/page_alloc: free pages in a single pass during bulk free)
  c000d687ce22252c8ea96e47b4a2add592fbad6c/0x7002302 (mm/page_alloc: simplify how many pages are selected per pcp list during bulk free)
  7decb609034044e56cffd1c9971738878467ee96/0x7002402 (mm/page_alloc: Do not prefetch buddies during bulk free)

8391e0a7e1728d74 c000d687ce22252c8ea96e47b4a 7decb609034044e56cffd1c9971
---------------- --------------------------- ---------------------------
         %stddev     %change         %stddev     %change         %stddev
             \          |                \          |                \
  11807831            -0.5%   11750578            -0.3%   11778047        will-it-scale.144.processes
     15.44 ± 10%      -4.9       10.58 ±  8%      +0.6       16.01 ±  5%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.rmqueue_bulk.get_page_from_freelist.__alloc_pages
      4.72 ±  8%      -1.7        2.98            -0.1        4.63 ±  3%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages

nr_task=16

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/mode/test/thp_enabled/cpufreq_governor/ucode:
  lkp-cpl-4sp1/will-it-scale/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/16/process/page_fault1/never/performance/0x7002402

commit:
  8391e0a7e1728d74faecebf096b446ac5d0a5709 (mm/page_alloc: free pages in a single pass during bulk free)
  c000d687ce22252c8ea96e47b4a2add592fbad6c (mm/page_alloc: simplify how many pages are selected per pcp list during bulk free)
  7decb609034044e56cffd1c9971738878467ee96 (mm/page_alloc: Do not prefetch buddies during bulk free)

8391e0a7e1728d74 c000d687ce22252c8ea96e47b4a 7decb609034044e56cffd1c9971
---------------- --------------------------- ---------------------------
         %stddev     %change         %stddev     %change         %stddev
             \          |                \          |                \
   3410615            +0.2%    3416565            +0.2%    3415846        will-it-scale.16.processes
      4.83 ±  3%      -1.1        3.76 ±  9%      -0.4        4.40 ±  4%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.rmqueue_bulk.get_page_from_freelist.__alloc_pages
      1.35 ±  9%      -0.4        0.99 ± 14%      -0.2        1.17 ±  3%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages

Regards,
Aaron