UnixBench/Execl represents a class of workload in which bash scripts are spawned frequently to do short jobs. When running many parallel tasks, heavy osq_lock contention is observed from do_mmap and exit_mmap, both reached from load_elf_binary through the call chain "execl->do_execveat_common->bprm_execve->load_elf_binary".

In do_mmap, mmap_region() is called to create a vma node, initialize it, insert it into the VMA tree maintained in mm_struct and into the i_mmap tree of the mapped file, and then increment map_count to record the number of vma nodes in use. The hot osq_lock belongs to the rwsem that protects the file's i_mmap tree. The mm_struct updates, i.e. the vma insertion into the VMA tree and the map_count increment, do not touch the i_mmap tree at all, so move them out of the lock's critical section to reduce the hold time on the lock.

With this change, on an Intel Sapphire Rapids 112C/224T platform, based on v6.0-rc6, the score with 160 parallel tasks improves by 12%. The patch shows no obvious performance gain on v6.5-rc1 due to a regression of this benchmark introduced by commit f1a7941243c102a44e8847e3b94ff4ff3ec56f25 ("mm: convert mm's rss stats into percpu_counter"). Related discussion and conclusions can be found in the mail thread initiated by 0day:

Link: https://lore.kernel.org/linux-mm/a4aa2e13-7187-600b-c628-7e8fb108def0@xxxxxxxxx/

Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
Signed-off-by: Yu Ma <yu.ma@xxxxxxxxx>
---
v2 -> v3: Rebased the patch to v6.5-rc1, which includes commit 1c7873e3364 ("mm: lock newly mapped VMA with corrected ordering"), and updated the commit message to reflect the status on v6.5-rc1.
v1 -> v2: Updated vma_link() to reduce the hold time on the file mapping lock as well. Based on v6.5-rc1, vma_link() is only called by insert_vm_struct() and copy_vma(), which are both protected by mmap_lock.
---
 mm/mmap.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 3eda23c9ebe7..ce31aec82e82 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -412,14 +412,11 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
 	if (vma_iter_prealloc(&vmi))
 		return -ENOMEM;
 
+	vma_iter_store(&vmi, vma);
+
 	if (vma->vm_file) {
 		mapping = vma->vm_file->f_mapping;
 		i_mmap_lock_write(mapping);
-	}
-
-	vma_iter_store(&vmi, vma);
-
-	if (mapping) {
 		__vma_link_file(vma, mapping);
 		i_mmap_unlock_write(mapping);
 	}
@@ -2811,12 +2808,10 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	/* Lock the VMA since it is modified after insertion into VMA tree */
 	vma_start_write(vma);
-	if (vma->vm_file)
-		i_mmap_lock_write(vma->vm_file->f_mapping);
-
 	vma_iter_store(&vmi, vma);
 	mm->map_count++;
 	if (vma->vm_file) {
+		i_mmap_lock_write(vma->vm_file->f_mapping);
 		if (vma->vm_flags & VM_SHARED)
 			mapping_allow_writable(vma->vm_file->f_mapping);
-- 
2.39.3
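
P.S. For readers following the locking change, below is a simplified before/after view of the mmap_region() path. It is an illustrative sketch paraphrased from the diff above, not the verbatim kernel code; the elided parts of the critical section are marked with comments.

	/*
	 * Before: the per-file i_mmap_rwsem is taken first, so the per-mm
	 * updates below are serialized behind the per-file lock.
	 */
	if (vma->vm_file)
		i_mmap_lock_write(vma->vm_file->f_mapping);

	vma_iter_store(&vmi, vma);	/* per-mm VMA tree, does not touch i_mmap */
	mm->map_count++;		/* per-mm counter, does not touch i_mmap */
	if (vma->vm_file) {
		/* ... i_mmap tree manipulation ... */
		i_mmap_unlock_write(vma->vm_file->f_mapping);
	}

	/*
	 * After: the per-mm updates are done outside the rwsem; only the
	 * i_mmap tree manipulation remains inside the critical section.
	 */
	vma_iter_store(&vmi, vma);
	mm->map_count++;
	if (vma->vm_file) {
		i_mmap_lock_write(vma->vm_file->f_mapping);
		/* ... i_mmap tree manipulation ... */
		i_mmap_unlock_write(vma->vm_file->f_mapping);
	}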