Hello, Andrew!

> Hello, folks!
>
> This is v2 of the series that aims to minimize vmap lock
> contention. It is based on the v6.5-rc6 tag. Documentation
> about it can be found here:
>
> wget ftp://vps418301.ovh.net/incoming/Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf
>
> Even though it is a bit outdated (it follows v1), it still gives a
> good overview of the problem and how it can be solved. I can update
> it on request.
>
> The v1 is here: https://lore.kernel.org/linux-mm/ZIAqojPKjChJTssg@pc636/T/
>
> Delta v1 -> v2:
> - open-coded locking;
> - switch to an array of nodes instead of a per-cpu definition;
> - density is 2 cores per node (not equal to the number of CPUs);
> - on the free path, VAs first go back to their owner node and later
>   to a global heap once a block is fully freed; the nid is saved in
>   va->flags;
> - add helpers to drain lazily-freed areas faster under high pressure;
> - picked up all Reviewed-by tags.
>
> Test on an AMD Ryzen Threadripper 3970X 32-Core Processor:
> sudo ./test_vmalloc.sh run_test_mask=127 nr_threads=64
>
> <v6.5-rc6 perf>
>   94.17%  0.90%  [kernel]   [k] _raw_spin_lock
>   93.27% 93.05%  [kernel]   [k] native_queued_spin_lock_slowpath
>   74.69%  0.25%  [kernel]   [k] __vmalloc_node_range
>   72.64%  0.01%  [kernel]   [k] __get_vm_area_node
>   72.04%  0.89%  [kernel]   [k] alloc_vmap_area
>   42.17%  0.00%  [kernel]   [k] vmalloc
>   32.53%  0.00%  [kernel]   [k] __vmalloc_node
>   24.91%  0.25%  [kernel]   [k] vfree
>   24.32%  0.01%  [kernel]   [k] remove_vm_area
>   22.63%  0.21%  [kernel]   [k] find_unlink_vmap_area
>   15.51%  0.00%  [unknown]  [k] 0xffffffffc09a74ac
>   14.35%  0.00%  [kernel]   [k] ret_from_fork_asm
>   14.35%  0.00%  [kernel]   [k] ret_from_fork
>   14.35%  0.00%  [kernel]   [k] kthread
> <v6.5-rc6 perf>
>
> vs
>
> <v6.5-rc6+v2 perf>
>   74.32%  2.42%  [kernel]   [k] __vmalloc_node_range
>   69.58%  0.01%  [kernel]   [k] vmalloc
>   54.21%  1.17%  [kernel]   [k] __alloc_pages_bulk
>   48.13% 47.91%  [kernel]   [k] clear_page_orig
>   43.60%  0.01%  [unknown]  [k] 0xffffffffc082f16f
>   32.06%  0.00%  [kernel]   [k] ret_from_fork_asm
>   32.06%  0.00%  [kernel]   [k] ret_from_fork
>   32.06%  0.00%  [kernel]   [k] kthread
>   31.30%  0.00%  [unknown]  [k] 0xffffffffc082f889
>   22.98%  4.16%  [kernel]   [k] vfree
>   14.36%  0.28%  [kernel]   [k] __get_vm_area_node
>   13.43%  3.35%  [kernel]   [k] alloc_vmap_area
>   10.86%  0.04%  [kernel]   [k] remove_vm_area
>    8.89%  2.75%  [kernel]   [k] _raw_spin_lock
>    7.19%  0.00%  [unknown]  [k] 0xffffffffc082fba3
>    6.65%  1.37%  [kernel]   [k] free_unref_page
>    6.13%  6.11%  [kernel]   [k] native_queued_spin_lock_slowpath
> <v6.5-rc6+v2 perf>
>
> On smaller systems, for example an 8xCPU Hikey960 board, the
> contention is not that high, approximately 16 percent.
>
> Uladzislau Rezki (Sony) (9):
>   mm: vmalloc: Add va_alloc() helper
>   mm: vmalloc: Rename adjust_va_to_fit_type() function
>   mm: vmalloc: Move vmap_init_free_space() down in vmalloc.c
>   mm: vmalloc: Remove global vmap_area_root rb-tree
>   mm: vmalloc: Remove global purge_vmap_area_root rb-tree
>   mm: vmalloc: Offload free_vmap_area_lock lock
>   mm: vmalloc: Support multiple nodes in vread_iter
>   mm: vmalloc: Support multiple nodes in vmallocinfo
>   mm: vmalloc: Set nr_nodes/node_size based on CPU-cores
>
>  mm/vmalloc.c | 929 +++++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 683 insertions(+), 246 deletions(-)
>
> --
> 2.30.2
>

It would be good if this series could get some runtime and testing from
people.
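To make the nid-in-va->flags idea from the changelog concrete, here is a
tiny standalone model. The names, the cpu-to-node mapping, and the bit
layout below are illustrative assumptions only, not the actual code of
the series:

<snip>
/* model.c: cc -Wall model.c && ./a.out */
#include <stdio.h>

#define NR_NODES  16   /* assumed: 32 cores, 2 cores per node */
#define NID_SHIFT 4    /* assumed bit layout within va->flags */
#define NID_MASK  (~0UL << NID_SHIFT)

struct vmap_area_model {
	unsigned long va_start;
	unsigned long flags;	/* owner node id is encoded here */
};

/* Allocation side: each CPU works against "its" node. */
static unsigned int cpu_to_nid(unsigned int cpu)
{
	return (cpu / 2) % NR_NODES;
}

static unsigned long encode_nid(unsigned long flags, unsigned int nid)
{
	return (flags & ~NID_MASK) | ((unsigned long) nid << NID_SHIFT);
}

static unsigned int decode_nid(unsigned long flags)
{
	return (unsigned int) ((flags & NID_MASK) >> NID_SHIFT);
}

int main(void)
{
	struct vmap_area_model va = { .va_start = 0xffffc90000000000UL };
	unsigned int cpu = 27;

	/* alloc path: remember which node handed out this VA */
	va.flags = encode_nid(va.flags, cpu_to_nid(cpu));

	/*
	 * free path: the VA returns to its owner node first; only a
	 * fully-freed block goes back to the global heap.
	 */
	printf("VA from cpu %u returns to node %u\n",
	       cpu, decode_nid(va.flags));
	return 0;
}
<snip>

The real series keeps per-node trees and locks behind this, of course;
the point of the model is only the owner-node round trip on the free
path, which is what keeps CPUs off a single global lock.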
So far there has been one warning from the test robot:
https://lore.kernel.org/lkml/202308292228.RRrGUYyB-lkp@xxxxxxxxx/T/#m397b3834cb3b7a0a53b8dffb3624384c8e278007

<snip>
urezki@pc638:~/data/raid0/coding/linux.git$ git diff
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 08990f630c21..7105d7bcd37e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4778,7 +4778,7 @@ static void vmap_init_free_space(void)
 	 * |<--------------------------------->|
 	 */
 	for (busy = vmlist; busy; busy = busy->next) {
-		if (busy->addr - vmap_start > 0) {
+		if ((unsigned long) busy->addr - vmap_start > 0) {
 			free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);

 			if (!WARN_ON_ONCE(!free)) {
 				free->va_start = vmap_start;
urezki@pc638:~/data/raid0/coding/linux.git$
<snip>

This extra patch has to be applied on top to fix the warning (a
standalone example of the warning class is at the end of this mail).

From my side, I have tested it as much as I can. Could it be plugged
into linux-next to get some runtime? Or is there another way you would
prefer to go?

Thank you in advance!

--
Uladzislau Rezki
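For reference, here is a minimal reduction of the class of warning the
cast above silences; it is my own toy example, not the robot's
reproducer. Since busy->addr is a void *, without the cast the
subtraction is pointer arithmetic and "> 0" becomes an ordered
comparison of a pointer with integer zero:

<snip>
/* warn.c: cc -Wall -Wextra warn.c */
#include <stdio.h>

int main(void)
{
	void *addr = (void *) 0x2000UL;		/* stands in for busy->addr */
	unsigned long vmap_start = 0x1000UL;	/* stands in for vmap_start */

	/*
	 * Without the cast, "addr - vmap_start" is pointer arithmetic
	 * (a GNU extension on void *) and "> 0" is an ordered
	 * pointer-vs-zero comparison, which the compiler warns about.
	 * The cast keeps everything in plain unsigned integer math.
	 */
	if ((unsigned long) addr - vmap_start > 0)
		printf("there is a hole below the busy area\n");

	return 0;
}
<snip>

Note that after the cast the subtraction is unsigned, so "> 0"
effectively means "the addresses differ"; as far as I can tell that
matches the intent here, since vmlist entries never start below
vmap_start.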