Baseline is 6.1.38; the other is 6.1.38 with the patch from
https://lore.kernel.org/linux-mm/a44ff1d018998e3330e309ac3ae76575bf09e311.camel@xxxxxxxxxxxxx/T/
The AMD and Intel machines are both dual socket and the ARM machine is
single socket. I happen to have those set up to grab SReclaim and
SUnreclaim, so I could run them quickly. I can certainly dig into more
details, though.

On Fri, Jul 21, 2023 at 11:40 AM Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx> wrote:
>
> On Fri, Jul 21, 2023 at 11:50 PM Binder Makin <merimus@xxxxxxxxxx> wrote:
> >
> > Quick run with hackbench and unixbench on large Intel, AMD, and ARM machines.
> > Patch was applied to 6.1.38.
> >
> > hackbench
> > Intel performance -2.9%  to +1.57%   SReclaim  -3.2%    SUnreclaim  -2.4%
> > AMD   performance -28%   to +7.58%   SReclaim +21.31%   SUnreclaim +20.72%
> > ARM   performance -0.6%  to +1.6%    SReclaim +24%      SUnreclaim +70%
> >
> > unixbench
> > Intel performance -1.4%  to +1.59%   SReclaim  -1.65%   SUnreclaim  -1.59%
> > AMD   performance -1.9%  to +1.05%   SReclaim  -3.1%    SUnreclaim  -0.81%
> > ARM   performance -0.09% to +0.54%   SReclaim  -1.05%   SUnreclaim  -2.03%
> >
> > AMD Hackbench
> > 28% drop on hackbench_thread_pipes_234
>
> Hi Binder,
> Thank you for measuring!!
>
> Can you please provide more information?
> Is the baseline 6.1.38, and does the other have one patch or two patches
> applied on top of the baseline?
> (optimizing slub memory usage v2, and not allocating high order slabs
> from remote nodes)
>
> The 28% drop on AMD is quite large, and the overall memory usage increased a lot.
>
> Does the AMD machine have 2 sockets?
> Did remote node allocations increase or decrease? `numastat`
>
> Can you get some profiles indicating increased list_lock contention?
> (or a change in the values reported by `slabinfo skbuff_head_cache`
> when built with CONFIG_SLUB_STATS?)
>
> > On Thu, Jul 20, 2023 at 11:08 AM Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx> wrote:
> > >
> > > On Thu, Jul 20, 2023 at 11:16 PM Feng Tang <feng.tang@xxxxxxxxx> wrote:
> > > >
> > > > Hi Hyeonggon,
> > > >
> > > > On Thu, Jul 20, 2023 at 08:59:56PM +0800, Hyeonggon Yoo wrote:
> > > > > On Thu, Jul 20, 2023 at 12:01 PM Oliver Sang <oliver.sang@xxxxxxxxx> wrote:
> > > > > >
> > > > > > hi, Hyeonggon Yoo,
> > > > > >
> > > > > > On Tue, Jul 18, 2023 at 03:43:16PM +0900, Hyeonggon Yoo wrote:
> > > > > > > On Mon, Jul 17, 2023 at 10:41 PM kernel test robot
> > > > > > > <oliver.sang@xxxxxxxxx> wrote:
> > > > > > > >
> > > > > > > > Hello,
> > > > > > > >
> > > > > > > > kernel test robot noticed a -12.5% regression of hackbench.throughput on:
> > > > > > > >
> > > > > > > > commit: a0fd217e6d6fbd23e91f8796787b621e7d576088 ("[PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage")
> > > > > > > > url: https://github.com/intel-lab-lkp/linux/commits/Jay-Patel/mm-slub-Optimize-slub-memory-usage/20230628-180050
> > > > > > > > base: git://git.kernel.org/cgit/linux/kernel/git/vbabka/slab.git for-next
> > > > > > > > patch link: https://lore.kernel.org/all/20230628095740.589893-1-jaypatel@xxxxxxxxxxxxx/
> > > > > > > > patch subject: [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage
> > > > > > > >
> > > > > > > > testcase: hackbench
> > > > > > > > test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
> > > > > > > > parameters:
> > > > > > > >
> > > > > > > >   nr_threads: 100%
> > > > > > > >   iterations: 4
> > > > > > > >   mode: process
> > > > > > > >   ipc: socket
> > > > > > > >   cpufreq_governor: performance
> > > > > > > >
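(For orientation, these lkp parameters map onto something close to the plain
hackbench invocation sketched below. This is only a rough sketch: the group
and loop counts are placeholders -- the exact values are encoded in the
attached job.yaml -- and process mode over sockets is the default behaviour
of the rt-tests hackbench binary assumed here.)

  # rough stand-in for the lkp job above (placeholder group/loop counts);
  # nr_threads: 100% on a 128-CPU box is approximated as 128 groups
  hackbench -g 128 -P -l 60000    # -P = process mode; socket IPC is the default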
> > > > > > > > If you fix the issue in a separate patch/commit (i.e. not just a new version of
> > > > > > > > the same patch/commit), kindly add following tags
> > > > > > > > | Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
> > > > > > > > | Closes: https://lore.kernel.org/oe-lkp/202307172140.3b34825a-oliver.sang@xxxxxxxxx
> > > > > > > >
> > > > > > > > Details are as below:
> > > > > > > > -------------------------------------------------------------------------------------------------->
> > > > > > > >
> > > > > > > > To reproduce:
> > > > > > > >
> > > > > > > >   git clone https://github.com/intel/lkp-tests.git
> > > > > > > >   cd lkp-tests
> > > > > > > >   sudo bin/lkp install job.yaml            # job file is attached in this email
> > > > > > > >   bin/lkp split-job --compatible job.yaml  # generate the yaml file for lkp run
> > > > > > > >   sudo bin/lkp run generated-yaml-file
> > > > > > > >
> > > > > > > >   # if you come across any failure that blocks the test,
> > > > > > > >   # please remove ~/.lkp and /lkp dir to run from a clean state.
> > > > > > > >
> > > > > > > > =========================================================================================
> > > > > > > > compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
> > > > > > > >   gcc-12/performance/socket/4/x86_64-rhel-8.3/process/100%/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp2/hackbench
> > > > > > > >
> > > > > > > > commit:
> > > > > > > >   7bc162d5cc ("Merge branches 'slab/for-6.5/prandom', 'slab/for-6.5/slab_no_merge' and 'slab/for-6.5/slab-deprecate' into slab/for-next")
> > > > > > > >   a0fd217e6d ("mm/slub: Optimize slub memory usage")
> > > > > > > >
> > > > > > > >   7bc162d5cc4de5c3   a0fd217e6d6fbd23e91f8796787
> > > > > > > >   ----------------   ---------------------------
> > > > > > > >        %stddev          %change           %stddev
> > > > > > > >            \                |                 \
> > > > > > > >   222503 ± 86%        +108.7%       464342 ± 58%   numa-meminfo.node1.Active
> > > > > > > >   222459 ± 86%        +108.7%       464294 ± 58%   numa-meminfo.node1.Active(anon)
> > > > > > > >    55573 ± 85%        +108.0%       115619 ± 58%   numa-vmstat.node1.nr_active_anon
> > > > > > > >    55573 ± 85%        +108.0%       115618 ± 58%   numa-vmstat.node1.nr_zone_active_anon
> > > > > > >
> > > > > > > I'm quite baffled while reading this.
> > > > > > > How did changing the slab order calculation double the number of active anon pages?
> > > > > > > I doubt the two experiments were performed with the same settings.
> > > > > >
> > > > > > let me introduce our test process.
> > > > > >
> > > > > > we make sure the tests on a commit and its parent have the exact same environment
> > > > > > except for the kernel difference, and we also make sure the configs used to build the
> > > > > > commit and its parent are identical.
> > > > > >
> > > > > > we run tests for one commit at least 6 times to make sure the data is stable.
> > > > > >
> > > > > > for this case, we rebuilt the commit's and its parent's kernels; the
> > > > > > config is attached FYI.
> > > > >
> > > > > Hello Oliver,
> > > > >
> > > > > Thank you for confirming that the testing environment is totally fine.
> > > > > And I'm sorry. I didn't mean to imply that your tests were bad.
> > > > > It was more like "oh, the data totally doesn't make sense to me",
> > > > > and I blamed the tests rather than my poor understanding of the data ;)
> > > > >
> > > > > Anyway,
> > > > > as the data shows a repeatable regression,
> > > > > let's think more about the possible scenario:
> > > > >
> > > > > I can't stop thinking that the patch must have affected the system's
> > > > > reclamation behavior in some way.
> > > > > (I think more active anon pages with a similar total number of anon
> > > > > pages implies the kernel scanned more pages)
> > > > >
> > > > > It might be because kswapd was woken up more frequently (possible if
> > > > > skbs were allocated with GFP_ATOMIC),
> > > > > but the data provided is not enough to support this argument.
> > > > >
> > > > > >   2.43 ± 7%   +4.5    6.90 ± 11%  perf-profile.children.cycles-pp.get_partial_node
> > > > > >   3.23 ± 5%   +4.5    7.77 ±  9%  perf-profile.children.cycles-pp.___slab_alloc
> > > > > >   7.51 ± 2%   +4.6   12.11 ±  5%  perf-profile.children.cycles-pp.kmalloc_reserve
> > > > > >   6.94 ± 2%   +4.7   11.62 ±  6%  perf-profile.children.cycles-pp.__kmalloc_node_track_caller
> > > > > >   6.46 ± 2%   +4.8   11.22 ±  6%  perf-profile.children.cycles-pp.__kmem_cache_alloc_node
> > > > > >   8.48 ± 4%   +7.9   16.42 ±  8%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> > > > > >   6.12 ± 6%   +8.6   14.74 ±  9%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
> > > > >
> > > > > And this increase in cycles in the SLUB slow path implies that the actual
> > > > > number of objects available in the per-cpu partial list has decreased,
> > > > > possibly because of inaccuracy in the heuristic?
> > > > > (because of the assumption that slabs cached per cpu are half-filled,
> > > > > and that the slabs' order is s->oo)
> > > >
> > > > From the patch:
> > > >
> > > >  static unsigned int slub_max_order =
> > > > -       IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
> > > > +       IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : 2;
> > > >
> > > > Could this be related? It reduces the order for some slab caches, so each
> > > > per-cpu slab will have fewer objects, which makes contention on the
> > > > per-node spinlock 'list_lock' more severe when slab allocation is under
> > > > pressure from many concurrent threads.
> > >
> > > hackbench uses skbuff_head_cache intensively, so we need to check whether
> > > skbuff_head_cache's order was increased or decreased. On my desktop
> > > skbuff_head_cache's order is 1, and I roughly guessed it was increased
> > > (but it's still worth checking in the testing env).
> > >
> > > But a decreased slab order does not necessarily mean a decreased number
> > > of cached objects per CPU, because when oo_order(s->oo) is smaller, it
> > > caches more slabs in the per-cpu slab list.
> > >
> > > I think the more problematic situation is when oo_order(s->oo) is higher,
> > > because the heuristic in SLUB assumes that each slab has order
> > > oo_order(s->oo) and is half-filled. If it allocates slabs with an order
> > > lower than oo_order(s->oo), the number of cached objects per CPU
> > > decreases drastically due to the inaccurate assumption.
> > >
> > > So yeah, a decreased number of cached objects per CPU could be the cause
> > > of the regression, due to the heuristic.
> > >
> > > And I have another theory: it allocated high order slabs from a remote node
> > > even if there were slabs with a lower order in the local node.
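(For anyone who wants to check these two theories on a live test machine, the
SLUB sysfs files expose both the order actually used for skbuff_head_cache and
the per-node spread of its slabs. A minimal sketch, assuming the standard
/sys/kernel/slab layout; the cache may be a merged alias, in which case the
name still resolves through a symlink:)

  cache=/sys/kernel/slab/skbuff_head_cache
  cat $cache/order          # page order used for newly allocated slabs
  cat $cache/object_size    # object size the order calculation is based on
  cat $cache/objs_per_slab  # objects per slab at that order
  cat $cache/slabs          # total slabs, then per-node counts (N0=... N1=...)
  cat $cache/partial        # per-node partial slabs -- a hint at list_lock pressure
  numastat -m | grep -i slab   # per-node slab memory, to compare before vs. after a run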
> > >
> > > Of course we need further experiments, but I think both improving the
> > > accuracy of the heuristic and avoiding allocating high order slabs from
> > > remote nodes would make SLUB more robust.
> > >
> > > > I don't have direct data to back it up, but I can try some experiments.
> > >
> > > Thank you for taking the time to experiment!
> > >
> > > Thanks,
> > > Hyeonggon
> > >
> > > > > > then retest on this test machine:
> > > > > > 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
> > >
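(As a footnote for reproducing the comparison requested upthread -- remote-node
allocation counts, list_lock contention, and SLUB stats for skbuff_head_cache --
a before/after collection could look roughly like the sketch below. It assumes
a kernel built with CONFIG_SLUB_STATS=y and the slabinfo tool built from
tools/vm/slabinfo.c in a 6.1 tree; the 60-second profile window is arbitrary.)

  numastat > numastat.before
  ./slabinfo skbuff_head_cache > slabinfo.before   # tool from tools/vm/slabinfo.c

  # profile the whole system while the hackbench job runs in another shell
  perf record -a -g -- sleep 60

  numastat > numastat.after
  ./slabinfo skbuff_head_cache > slabinfo.after

  # list_lock contention shows up as native_queued_spin_lock_slowpath reached
  # from get_partial_node / __slab_free in the report
  perf report --stdio | grep -B2 queued_spin_lock_slowpath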