> On Sep 12, 2023, at 11:14 AM, Feng Tang <feng.tang@xxxxxxxxx> wrote:
> 
> Hi Chuck Lever,
> 
> On Tue, Sep 12, 2023 at 09:01:29PM +0800, Chuck Lever III wrote:
>> 
>>> On Sep 11, 2023, at 9:25 PM, Oliver Sang <oliver.sang@xxxxxxxxx> wrote:
>>> 
>>> hi, Chuck Lever,
>>> 
>>> On Fri, Sep 08, 2023 at 02:43:22PM +0000, Chuck Lever III wrote:
>>>> 
>>>>> On Sep 8, 2023, at 1:26 AM, kernel test robot <oliver.sang@xxxxxxxxx> wrote:
>>>>> 
>>>>> Hello,
>>>>> 
>>>>> kernel test robot noticed a -19.0% regression of aim9.disk_src.ops_per_sec on:
>>>>> 
>>>>> commit: a2e459555c5f9da3e619b7e47a63f98574dc75f1 ("shmem: stable directory offsets")
>>>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>>>> 
>>>>> testcase: aim9
>>>>> test machine: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP) with 112G memory
>>>>> parameters:
>>>>> 
>>>>> 	testtime: 300s
>>>>> 	test: disk_src
>>>>> 	cpufreq_governor: performance
>>>>> 
>>>>> In addition to that, the commit also has significant impact on the following tests:
>>>>> 
>>>>> +------------------+--------------------------------------------------------------------------------------------------+
>>>>> | testcase: change | aim9: aim9.disk_src.ops_per_sec -14.6% regression                                                |
>>>>> | test machine     | 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP) with 112G memory |
>>>>> | test parameters  | cpufreq_governor=performance                                                                     |
>>>>> |                  | test=all                                                                                         |
>>>>> |                  | testtime=5s                                                                                      |
>>>>> +------------------+--------------------------------------------------------------------------------------------------+
>>>>> 
>>>>> If you fix the issue in a separate patch/commit (i.e. not just a new version of
>>>>> the same patch/commit), kindly add following tags
>>>>> | Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
>>>>> | Closes: https://lore.kernel.org/oe-lkp/202309081306.3ecb3734-oliver.sang@xxxxxxxxx
> 
>>>> But, I'm still in a position where I can't run this test,
>>>> and the results don't really indicate where the problem
>>>> is. So I can't possibly address this issue.
>>>> 
>>>> Any suggestions, advice, or help would be appreciated.
>>> 
>>> if you have further fix patch, could you let us know? I will test it.
>> 
>> Well that's the problem. Since I can't run the reproducer, there's
>> nothing I can do to troubleshoot the problem myself.
> 
> We dug more into the perf and other profiling data from the 0Day server
> running this case, and it seems that the new simple_offset_add()
> called by shmem_mknod() brings extra cost related to slab,
> specifically the 'radix_tree_node' cache, which causes the regression.

Thank you! Will ponder.
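
For anyone following along, here is a rough sketch of the path Feng points
at. It is not the exact fs/libfs.c code (the struct, field, and function
names below are illustrative), but it shows why every shmem_mknod() now
touches the xarray node allocator:

/*
 * Sketch of what the "shmem: stable directory offsets" change adds to
 * the mknod path: each new directory entry gets a stable offset
 * allocated cyclically from a per-directory xarray.  Identifiers here
 * are illustrative, not the exact fs/libfs.c names.
 */
#include <linux/dcache.h>
#include <linux/gfp.h>
#include <linux/xarray.h>

struct offset_ctx_sketch {
	struct xarray xa;	/* offset -> dentry; assumed set up with
				 * xa_init_flags(&xa, XA_FLAGS_ALLOC1) */
	u32 next_offset;	/* cursor for cyclic allocation */
};

static int offset_add_sketch(struct offset_ctx_sketch *octx,
			     struct dentry *dentry)
{
	static const struct xa_limit limit = XA_LIMIT(2, U32_MAX);
	u32 offset;
	int ret;

	/*
	 * The store into the xarray is what consumes xa_node objects
	 * from the shared radix_tree_node kmem_cache -- the
	 * xas_store()/xas_create()/xas_expand()/xas_alloc() chain in
	 * the profile quoted below.
	 */
	ret = xa_alloc_cyclic(&octx->xa, &offset, dentry, limit,
			      &octx->next_offset, GFP_KERNEL);
	if (ret < 0)
		return ret;

	/* Remember the offset on the dentry for later readdir/seek. */
	dentry->d_fsdata = (void *)(unsigned long)offset;
	return 0;
}

So every create in tmpfs now does at least one xarray insertion, and the
tree has to grow nodes as directories fill, which would line up with the
~40% jump in radix_tree_node objects in the slabinfo data quoted below.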
> Here is some slabinfo diff for commit a2e459555c5f and its parent:
> 
>   23a31d87645c6527            a2e459555c5f9da3e619b7e47a6
>   ----------------            ---------------------------
> 
>      26363           +40.2%      36956        slabinfo.radix_tree_node.active_objs
>     941.00           +40.4%       1321        slabinfo.radix_tree_node.active_slabs
>      26363           +40.3%      37001        slabinfo.radix_tree_node.num_objs
>     941.00           +40.4%       1321        slabinfo.radix_tree_node.num_slabs
> 
> Also, the perf profile shows some differences:
> 
>       0.01 ±223%      +0.1        0.10 ± 28%  pp.self.shuffle_freelist
>       0.00            +0.1        0.11 ± 40%  pp.self.xas_create
>       0.00            +0.1        0.12 ± 27%  pp.self.xas_find_marked
>       0.00            +0.1        0.14 ± 18%  pp.self.xas_alloc
>       0.03 ±103%      +0.1        0.17 ± 29%  pp.self.xas_descend
>       0.00            +0.2        0.16 ± 23%  pp.self.xas_expand
>       0.10 ± 22%      +0.2        0.27 ± 16%  pp.self.rcu_segcblist_enqueue
>       0.92 ± 35%      +0.3        1.22 ± 11%  pp.self.kmem_cache_free
>       0.00            +0.4        0.36 ± 16%  pp.self.xas_store
>       0.32 ± 30%      +0.4        0.71 ± 12%  pp.self.__call_rcu_common
>       0.18 ± 27%      +0.5        0.65 ±  8%  pp.self.kmem_cache_alloc_lru
>       0.36 ± 79%      +0.6        0.96 ± 15%  pp.self.__slab_free
>       0.00            +0.8        0.80 ± 14%  pp.self.radix_tree_node_rcu_free
>       0.00            +1.0        1.01 ± 16%  pp.self.radix_tree_node_ctor
> 
> Some perf profile from a2e459555c5f is:
> 
>     - 17.09%     0.09%  singleuser  [kernel.kallsyms]  [k] path_openat
>        - 16.99% path_openat
>           - 12.23% open_last_lookups
>              - 11.33% lookup_open.isra.0
>                 - 9.05% shmem_mknod
>                    - 5.11% simple_offset_add
>                       - 4.95% __xa_alloc_cyclic
>                          - 4.88% __xa_alloc
>                             - 4.76% xas_store
>                                - xas_create
>                                   - 2.40% xas_expand.constprop.0
>                                      - 2.01% xas_alloc
>                                         - kmem_cache_alloc_lru
>                                            - 1.28% ___slab_alloc
>                                               - 1.22% allocate_slab
>                                                  - 1.19% shuffle_freelist
>                                                     - 1.04% setup_object
>                                                          radix_tree_node_ctor
> 
> Please let me know if you need more info.
> 
>> 
>> Is there any hope in getting this reproducer to run on Fedora?
> 
> I haven't succeeded in reproducing it locally myself; will keep trying
> tomorrow.
> 
> Thanks,
> Feng
> 
>> 
>> -- 
>> Chuck Lever

-- 
Chuck Lever