Re: [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage

Hi Hyeonggon,

On Thu, Jul 20, 2023 at 08:59:56PM +0800, Hyeonggon Yoo wrote:
> On Thu, Jul 20, 2023 at 12:01 PM Oliver Sang <oliver.sang@xxxxxxxxx> wrote:
> >
> > hi, Hyeonggon Yoo,
> >
> > On Tue, Jul 18, 2023 at 03:43:16PM +0900, Hyeonggon Yoo wrote:
> > > On Mon, Jul 17, 2023 at 10:41 PM kernel test robot
> > > <oliver.sang@xxxxxxxxx> wrote:
> > > >
> > > >
> > > >
> > > > Hello,
> > > >
> > > > kernel test robot noticed a -12.5% regression of hackbench.throughput on:
> > > >
> > > >
> > > > commit: a0fd217e6d6fbd23e91f8796787b621e7d576088 ("[PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage")
> > > > url: https://github.com/intel-lab-lkp/linux/commits/Jay-Patel/mm-slub-Optimize-slub-memory-usage/20230628-180050
> > > > base: git://git.kernel.org/cgit/linux/kernel/git/vbabka/slab.git for-next
> > > > patch link: https://lore.kernel.org/all/20230628095740.589893-1-jaypatel@xxxxxxxxxxxxx/
> > > > patch subject: [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage
> > > >
> > > > testcase: hackbench
> > > > test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
> > > > parameters:
> > > >
> > > >         nr_threads: 100%
> > > >         iterations: 4
> > > >         mode: process
> > > >         ipc: socket
> > > >         cpufreq_governor: performance
> > > >
> > > >
> > > >
> > > >
> > > > If you fix the issue in a separate patch/commit (i.e. not just a new version of
> > > > the same patch/commit), kindly add the following tags
> > > > | Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
> > > > | Closes: https://lore.kernel.org/oe-lkp/202307172140.3b34825a-oliver.sang@xxxxxxxxx
> > > >
> > > >
> > > > Details are as below:
> > > > -------------------------------------------------------------------------------------------------->
> > > >
> > > >
> > > > To reproduce:
> > > >
> > > >         git clone https://github.com/intel/lkp-tests.git
> > > >         cd lkp-tests
> > > >         sudo bin/lkp install job.yaml           # job file is attached in this email
> > > >         bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
> > > >         sudo bin/lkp run generated-yaml-file
> > > >
> > > >         # if you come across any failure that blocks the test,
> > > >         # please remove ~/.lkp and /lkp dir to run from a clean state.
> > > >
> > > > =========================================================================================
> > > > compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
> > > >   gcc-12/performance/socket/4/x86_64-rhel-8.3/process/100%/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp2/hackbench
> > > >
> > > > commit:
> > > >   7bc162d5cc ("Merge branches 'slab/for-6.5/prandom', 'slab/for-6.5/slab_no_merge' and 'slab/for-6.5/slab-deprecate' into slab/for-next")
> > > >   a0fd217e6d ("mm/slub: Optimize slub memory usage")
> > > >
> > > > 7bc162d5cc4de5c3 a0fd217e6d6fbd23e91f8796787
> > > > ---------------- ---------------------------
> > > >          %stddev     %change         %stddev
> > > >              \          |                \
> > > >     222503 ± 86%    +108.7%     464342 ± 58%  numa-meminfo.node1.Active
> > > >     222459 ± 86%    +108.7%     464294 ± 58%  numa-meminfo.node1.Active(anon)
> > > >      55573 ± 85%    +108.0%     115619 ± 58%  numa-vmstat.node1.nr_active_anon
> > > >      55573 ± 85%    +108.0%     115618 ± 58%  numa-vmstat.node1.nr_zone_active_anon
> > >
> > > I'm quite baffled while reading this.
> > > How did changing the slab order calculation double the number of active anon pages?
> > > I doubt the two experiments were performed with the same settings.
> >
> > let me introduce our test process.
> >
> > we make sure the tests on the commit and its parent run in exactly the same
> > environment, the only difference being the kernel, and we also make sure the
> > configs used to build the commit and its parent are identical.
> >
> > we run tests for one commit at least 6 times to make sure the data is stable.
> >
> > for this case specifically, we rebuilt both the commit's and its parent's
> > kernels; the config is attached FYI.
> 
> Hello Oliver,
> 
> Thank you for confirming the testing environment is totally fine,
> and I'm sorry; I didn't mean to imply that your tests were bad.
> 
> It was more like "oh, the data totally doesn't make sense to me",
> and I blamed the tests rather than my poor understanding of the data ;)
> 
> Anyway,
> as the data shows a repeatable regression,
> let's think more about possible scenarios:
> 
> I can't stop thinking that the patch must've affected the system's
> reclamation behavior in some way.
> (I think more active anon pages with a similar total number of anon
> pages implies the kernel scanned more pages.)
> 
> It might be because kswapd was woken up more frequently (possible if
> skbs were allocated with GFP_ATOMIC), but the data provided is not
> enough to support this argument.
> 
> >      2.43 ±  7%      +4.5        6.90 ± 11%  perf-profile.children.cycles-pp.get_partial_node
> >      3.23 ±  5%      +4.5        7.77 ±  9%  perf-profile.children.cycles-pp.___slab_alloc
> >      7.51 ±  2%      +4.6       12.11 ±  5%  perf-profile.children.cycles-pp.kmalloc_reserve
> >      6.94 ±  2%      +4.7       11.62 ±  6%  perf-profile.children.cycles-pp.__kmalloc_node_track_caller
> >      6.46 ±  2%      +4.8       11.22 ±  6%  perf-profile.children.cycles-pp.__kmem_cache_alloc_node
> >      8.48 ±  4%      +7.9       16.42 ±  8%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> >      6.12 ±  6%      +8.6       14.74 ±  9%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
> 
> And these increased cycles in the SLUB slowpath imply that the actual
> number of objects available in the per-cpu partial list has decreased,
> possibly because of inaccuracy in the heuristic?
> (because it assumes that the slabs cached per cpu are half-filled,
> and that the slabs' order is s->oo)

From the patch:

 static unsigned int slub_max_order =
-	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
+	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : 2;

Could this be related? It reduces the order for some slab caches, so
each per-cpu slab will have fewer objects, which makes contention on
the per-node spinlock 'list_lock' more severe when slab allocation is
under pressure from many concurrent threads.

I don't have direct data to back this up, and I can try some experiments.
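As a rough sanity check of that theory, here is a small userspace
sketch (my own, not from the patch) that mirrors the order_objects()
arithmetic in mm/slub.c; the object sizes below are illustrative
examples, not values measured from this report:

/*
 * Back-of-envelope estimate: how many objects fit in one slab at a
 * given page order, using the same formula as order_objects() in
 * mm/slub.c: (PAGE_SIZE << order) / size.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long objs_per_slab(unsigned int order, unsigned long size)
{
	return (PAGE_SIZE << order) / size;
}

int main(void)
{
	unsigned long sizes[] = { 256, 512, 1024, 2048 };
	unsigned int i;

	printf("object size  order-3 objs  order-2 objs\n");
	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%11lu  %12lu  %12lu\n",
		       sizes[i],
		       objs_per_slab(3, sizes[i]),	/* old PAGE_ALLOC_COSTLY_ORDER cap */
		       objs_per_slab(2, sizes[i]));	/* new cap from the patch */
	return 0;
}

Capping the order at 2 instead of 3 halves the objects per slab for
these sizes, so under heavy concurrent allocation the node's
'list_lock' would need to be taken roughly twice as often to refill
slabs. One quick experiment would be booting the patched kernel with
the existing slub_max_order=3 boot parameter and re-running hackbench
to see whether the throughput recovers.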

Thanks,
Feng

> Any thoughts, Vlastimil or Jay?
> 
> >
> > then retest on this test machine:
> > 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory



