Re: [PATCH 00/10] perf/uprobe: Optimize uprobes

On Tue, Jul 30, 2024 at 6:46 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Mon, Jul 22, 2024 at 12:09:21PM -0700, Suren Baghdasaryan wrote:
> > On Wed, Jul 10, 2024 at 2:40 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > >
> > > On Wed, Jul 10, 2024 at 11:16:31AM +0200, Peter Zijlstra wrote:
> > >
> > > > If it were an actual sequence count, I could make it work, but sadly,
> > > > not. Also, vma_end_write() seems to be missing :-( If anything it could
> > > > be used to lockdep annotate the thing.
> >
> > Thanks Matthew for forwarding me this discussion!
> >
> > > >
> > > > Mooo.. I need to stare more at this to see if perhaps it can be made to
> > > > work, but so far, no joy :/
> > >
> > > See, this is what I want, except I can't close the race against VMA
> > > modification because of that crazy locking scheme :/
> >
> > Happy to explain more about this crazy locking scheme. The catch is
> > that we can write-lock a VMA only while holding mmap_lock for write
> > and we unlock all write-locked VMAs together when we drop that
> > mmap_lock:
> >
> > mmap_write_lock(mm);
> > vma_start_write(vma1);
> > vma_start_write(vma2);
> > ...
> > mmap_write_unlock(mm); -> vma_end_write_all(mm); // unlocks all locked vmas
> >
> > This is done because oftentimes we need to lock multiple VMAs when
> > modifying the address space (vma merge/split) and unlocking them
> > individually would be more expensive than unlocking them in bulk by
> > incrementing mm->mm_lock_seq.
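
For reference, here is a minimal sketch of how those two operations fit
together. This is simplified from the actual per-VMA lock implementation
(error paths, memory ordering, and the reader side are elided), so treat
it as illustrative rather than authoritative:

static inline void vma_start_write(struct vm_area_struct *vma)
{
	mmap_assert_write_locked(vma->vm_mm);

	/* Already write-locked during this mmap_lock critical section? */
	if (vma->vm_lock_seq == vma->vm_mm->mm_lock_seq)
		return;

	down_write(&vma->vm_lock->lock);
	/* Mark the vma locked until the next vma_end_write_all(). */
	vma->vm_lock_seq = vma->vm_mm->mm_lock_seq;
	up_write(&vma->vm_lock->lock);
}

static inline void vma_end_write_all(struct mm_struct *mm)
{
	mmap_assert_write_locked(mm);
	/*
	 * Bumping mm_lock_seq invalidates every vma->vm_lock_seq set
	 * above, so all write-locked vmas are released in O(1).
	 */
	mm->mm_lock_seq++;
}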
>
> Right, but you can do that without having it quite this insane.

I'm happy to take any suggestions that would improve the current mechanism.

>
> You can still make mm_lock_seq a proper seqcount, and still have
> vma_end_write() -- even if it's an empty stub only used for validation.

It's doable, but what would we be validating here? That the vma is indeed locked?
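
If the goal is just annotation, I can imagine something like the
following hypothetical stub (the name and the exact assertion are made
up for illustration, nothing like this exists in the tree today):

static inline void vma_end_write(struct vm_area_struct *vma)
{
	/*
	 * Validation only: assert the vma is still write-locked,
	 * i.e. its sequence matches the current mm_lock_seq.
	 */
	VM_BUG_ON_VMA(vma->vm_lock_seq != vma->vm_mm->mm_lock_seq, vma);
}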

>
> That is, something like the below, which adds a light barrier, ensures
> that mm_lock_seq is a proper sequence count.
>
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index de9dc20b01ba..daa19d1a3022 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -104,6 +104,8 @@ static inline void mmap_write_lock(struct mm_struct *mm)
>  {
>         __mmap_lock_trace_start_locking(mm, true);
>         down_write(&mm->mmap_lock);
> +       WRITE_ONCE(mm->mm_lock_seq, mm->mm_lock_seq+1);
> +       smp_wmb();
>         __mmap_lock_trace_acquire_returned(mm, true, true);
>  }

Ok, I'll try the above change and check the benchmarks for any regressions.
Thanks for the suggestions, Peter!
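
If I read the intent correctly, with this extra increment mm_lock_seq
behaves like a seqcount: it is odd while the mmap_lock is write-held and
even otherwise, because the existing unlock path already bumps it once
more. Roughly, as a sketch of the current unlock side for context:

static inline void mmap_write_unlock(struct mm_struct *mm)
{
	__mmap_lock_trace_released(mm, true);
	vma_end_write_all(mm);	/* second increment: count is even again */
	up_write(&mm->mmap_lock);
}

The smp_wmb() after the increment in mmap_write_lock() then pairs with
the smp_rmb() on the lockless reader side below.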

>
>
> With the above addition we could write (although I think we still need
> the RCU_SLAB thing on files_cachep):
>
> static struct uprobe *__find_active_uprobe(unsigned long bp_vaddr)
> {
>         struct mm_struct *mm = current->mm;
>         struct uprobe *uprobe = NULL;
>         struct vm_area_struct *vma;
>         struct inode *inode;
>         loff_t offset;
>         int seq;
>
>         guard(rcu)();
>
>         seq = READ_ONCE(mm->mm_lock_seq);
>         smp_rmb();
>         do {
>                 vma = find_vma(mm, bp_vaddr);
>                 if (!vma)
>                         return NULL;
>
>                 if (!valid_vma(vma, false))
>                         return NULL;
>
>                 inode = file_inode(vma->vm_file);
>                 offset = vaddr_to_offset(vma, bp_vaddr);
>
>         } while (smp_rmb(), seq != READ_ONCE(mm->mm_lock_seq));
>
>         return find_uprobe(inode, offset);
> }
>
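
FWIW, if mm_lock_seq were converted to a real seqcount_t, the same
function could use the generic primitives, which encapsulate the
barriers and the odd/even writer-pending handling. A hypothetical
sketch, assuming such a conversion:

static struct uprobe *__find_active_uprobe(unsigned long bp_vaddr)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	struct inode *inode;
	loff_t offset;
	unsigned int seq;

	guard(rcu)();

	do {
		/* Waits out a writer in progress (odd count). */
		seq = read_seqcount_begin(&mm->mm_lock_seq);

		vma = find_vma(mm, bp_vaddr);
		if (!vma || !valid_vma(vma, false))
			return NULL;

		inode = file_inode(vma->vm_file);
		offset = vaddr_to_offset(vma, bp_vaddr);

		/* Retry if the address space changed underneath us. */
	} while (read_seqcount_retry(&mm->mm_lock_seq, seq));

	return find_uprobe(inode, offset);
}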




