[PATCH v2 0/9] Mitigate a vmap lock contention v2

Hello, folks!

This is v2 of the series, which aims to minimize vmap
lock contention. It is based on the tag v6.5-rc6. Documentation
about it can be found here:

wget ftp://vps418301.ovh.net/incoming/Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf

Even though it is a bit outdated (it follows v1), it still gives a
good overview of the problem and how it can be solved. I can update
it on request.

The v1 is here: https://lore.kernel.org/linux-mm/ZIAqojPKjChJTssg@pc636/T/

Delta v1 -> v2:
  - open-coded locking;
  - switched to an array of nodes instead of a per-cpu definition
    (see the sketch after this list);
  - density is 2 cores per node (not equal to the number of CPUs);
  - on the free path, VAs first go back to their owner node and later
    to a global heap if a block is fully freed; the nid is saved in
    va->flags;
  - added helpers to drain lazily-freed areas faster under high pressure;
  - picked up all Reviewed-by tags.
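
To illustrate the points above, here is a minimal sketch of the
per-node layout and the nid round-trip through va->flags. All
identifiers (vmap_node, local_node(), owner_node(), VA_NID_SHIFT,
VA_NID_MASK) are assumptions for illustration, not the exact names
used by the series:

  /* Illustrative sketch only; names are assumptions, not the series' code. */
  struct vmap_node {
          spinlock_t lock;                /* protects this node only */
          struct rb_root_cached free;     /* node-local free-space tree */
          struct list_head purge_list;    /* node's lazily-freed areas */
  };

  static struct vmap_node *nodes;         /* nr_nodes ~ num_possible_cpus() / 2 */
  static unsigned int nr_nodes;

  /* Alloc path: map the executing CPU to its node (2 cores per node). */
  static struct vmap_node *local_node(void)
  {
          return &nodes[(raw_smp_processor_id() / 2) % nr_nodes];
  }

  /* Free path: decode the owner node id that was stored in va->flags. */
  static struct vmap_node *owner_node(struct vmap_area *va)
  {
          return &nodes[(va->flags >> VA_NID_SHIFT) & VA_NID_MASK];
  }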

Test on AMD Ryzen Threadripper 3970X 32-Core Processor:
sudo ./test_vmalloc.sh run_test_mask=127 nr_threads=64
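
The exact perf invocation is not part of this letter; one assumed
way to capture a similar system-wide profile while the test runs is:

  sudo perf record -a -g -- ./test_vmalloc.sh run_test_mask=127 nr_threads=64
  sudo perf report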

<v6.5-rc6 perf>
  94.17%     0.90%  [kernel]    [k] _raw_spin_lock
  93.27%    93.05%  [kernel]    [k] native_queued_spin_lock_slowpath
  74.69%     0.25%  [kernel]    [k] __vmalloc_node_range
  72.64%     0.01%  [kernel]    [k] __get_vm_area_node
  72.04%     0.89%  [kernel]    [k] alloc_vmap_area
  42.17%     0.00%  [kernel]    [k] vmalloc
  32.53%     0.00%  [kernel]    [k] __vmalloc_node
  24.91%     0.25%  [kernel]    [k] vfree
  24.32%     0.01%  [kernel]    [k] remove_vm_area
  22.63%     0.21%  [kernel]    [k] find_unlink_vmap_area
  15.51%     0.00%  [unknown]   [k] 0xffffffffc09a74ac
  14.35%     0.00%  [kernel]    [k] ret_from_fork_asm
  14.35%     0.00%  [kernel]    [k] ret_from_fork
  14.35%     0.00%  [kernel]    [k] kthread
<v6.5-rc6 perf>
   vs
<v6.5-rc6+v2 perf>
  74.32%     2.42%  [kernel]    [k] __vmalloc_node_range
  69.58%     0.01%  [kernel]    [k] vmalloc
  54.21%     1.17%  [kernel]    [k] __alloc_pages_bulk
  48.13%    47.91%  [kernel]    [k] clear_page_orig
  43.60%     0.01%  [unknown]   [k] 0xffffffffc082f16f
  32.06%     0.00%  [kernel]    [k] ret_from_fork_asm
  32.06%     0.00%  [kernel]    [k] ret_from_fork
  32.06%     0.00%  [kernel]    [k] kthread
  31.30%     0.00%  [unknown]   [k] 0xffffffffc082f889
  22.98%     4.16%  [kernel]    [k] vfree
  14.36%     0.28%  [kernel]    [k] __get_vm_area_node
  13.43%     3.35%  [kernel]    [k] alloc_vmap_area
  10.86%     0.04%  [kernel]    [k] remove_vm_area
   8.89%     2.75%  [kernel]    [k] _raw_spin_lock
   7.19%     0.00%  [unknown]   [k] 0xffffffffc082fba3
   6.65%     1.37%  [kernel]    [k] free_unref_page
   6.13%     6.11%  [kernel]    [k] native_queued_spin_lock_slowpath
<v6.5-rc6+v2 perf>

On smaller systems, for example an 8-CPU Hikey960 board, the
contention is not as high, at approximately 16 percent.

Uladzislau Rezki (Sony) (9):
  mm: vmalloc: Add va_alloc() helper
  mm: vmalloc: Rename adjust_va_to_fit_type() function
  mm: vmalloc: Move vmap_init_free_space() down in vmalloc.c
  mm: vmalloc: Remove global vmap_area_root rb-tree
  mm: vmalloc: Remove global purge_vmap_area_root rb-tree
  mm: vmalloc: Offload free_vmap_area_lock lock
  mm: vmalloc: Support multiple nodes in vread_iter
  mm: vmalloc: Support multiple nodes in vmallocinfo
  mm: vmalloc: Set nr_nodes/node_size based on CPU-cores

 mm/vmalloc.c | 929 +++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 683 insertions(+), 246 deletions(-)

-- 
2.30.2
