On Wed, Jul 5, 2023 at 12:06 PM Kirill A. Shutemov <kirill@xxxxxxxxxxxxx> wrote:
>
> On Wed, Jul 05, 2023 at 01:33:48PM -0400, Liam R. Howlett wrote:
> > * Kirill A. Shutemov <kirill@xxxxxxxxxxxxx> [230705 12:54]:
> > > On Tue, Jun 06, 2023 at 03:20:13PM -0400, Liam R. Howlett wrote:
> > > > * Yu Ma <yu.ma@xxxxxxxxx> [230606 08:23]:
> > > > > UnixBench/Execl represents a class of workload where bash scripts are
> > > > > spawned frequently to do short jobs. When running many parallel tasks,
> > > > > hot osq_lock is observed from do_mmap and exit_mmap, both reached from
> > > > > load_elf_binary through the call chain
> > > > > "execl->do_execveat_common->bprm_execve->load_elf_binary". In do_mmap,
> > > > > mmap_region is called to create a vma node, initialize it, insert it
> > > > > into the vma tree in mm_struct and the i_mmap tree of the mapped file,
> > > > > and then increase map_count to record the number of vma nodes in use.
> > > > > The hot osq_lock protects operations on the file's i_mmap tree. The
> > > > > mm_struct changes, such as the vma insertion and the map_count update,
> > > > > do not affect the i_mmap tree, so move those operations out of the
> > > > > lock's critical section to reduce the hold time on the lock.
> > > > >
> > > > > With this change, on an Intel Sapphire Rapids 112C/224T platform, based
> > > > > on v6.0-rc6, the 160-parallel score improves by 12%. The patch has no
> > > > > obvious performance gain on v6.4-rc4 due to a regression in this
> > > > > benchmark from commit f1a7941243c102a44e8847e3b94ff4ff3ec56f25 ("mm:
> > > > > convert mm's rss stats into percpu_counter").
> > > >
> > > > I didn't think it was safe to insert a VMA into the VMA tree without
> > > > holding this write lock? We now have a window of time where a file
> > > > mapping doesn't exist for a vma that's in the tree? Is this always
> > > > safe? Does the locking order in mm/rmap.c need to change?
> > >
> > > We hold mmap lock on write here, right?
> >
> > Yes.
> >
> > > Who can observe the VMA until the
> > > lock is released?
> >
> > With CONFIG_PER_VMA_LOCK we can have the VMA read under the rcu read
> > lock for page faults from the tree. I am not sure the vma is
> > initialized enough to avoid page fault issues - vma_start_write()
> > should either be taken, or the vma fully initialised, as is the case
> > here.
>
> Right, with CONFIG_PER_VMA_LOCK the vma has to be unusable until it is
> fully initialized, effectively providing the same guarantees as the mmap
> write lock. If that is not the case, it is a CONFIG_PER_VMA_LOCK bug.

Jumping into the conversation. If we are adding a VMA into the tree before
it's fully usable, then we should write-lock it before it becomes visible
to page faults. Kirill is right that there is a problem, and we should not
rely on the vma->vm_file->f_mapping lock here. Instead we should
write-lock the VMA before adding it into the tree, even without this
change.

IIUC, the rule with mmap_lock is that a VMA can be safely modified if it
is either isolated or if we hold mmap_lock for write. With
CONFIG_PER_VMA_LOCK the same rule should apply to per-VMA locks: the VMA
should be either isolated or write-locked. Here we are adding the unlocked
VMA into the tree and then keep modifying it. This has not bitten us
because the modifications are only done to file-backed VMAs and we do not
handle file-backed page faults under per-VMA locks yet; however, it will
become a problem once we start supporting that. If we all agree to the
above, I can post a change to lock the VMA before adding it into the tree.

> > There is also a possibility of a driver mapping a VMA and having entry
> > points from other locations. It isn't accessed through the tree,
> > though, so I don't think this change will introduce new races?
>
> Right.
>
> > > It cannot be retrieved from the VMA tree, as that requires at least
> > > the read mmap lock, and the VMA doesn't exist anywhere else.
> > >
> > > I believe the change is safe.
> >
> > I guess insert_vm_struct() and vma_link() callers should be checked and
> > updated accordingly?
>
> Yep.
>
> --
>  Kiryl Shutsemau / Kirill A. Shutemov