Re: [RFC PATCH 2/3] Revert "android: binder: stop saving a pointer to the VMA"

* Carlos Llamas <cmllamas@xxxxxxxxxx> [230426 17:17]:
> On Mon, Apr 24, 2023 at 09:43:28PM -0400, Liam R. Howlett wrote:
> > * Carlos Llamas <cmllamas@xxxxxxxxxx> [230424 19:11]:
> > > 
> > > The specifics are in the third patch of this patchset, but the gist of it
> > > is that during the ->mmap() handler, binder completes the initialization
> > > of the binder_alloc structure, with the last step of this process being
> > > the caching of the vma pointer. Since the ordering is protected with a
> > > barrier, we can then check alloc->vma to determine whether the
> > > initialization has been completed.
> > > 
> > > Since this check is part of the critical path for every single binder
> > > transaction, the performance plummeted when we started contending for
> > > the mmap_lock. In this particular case, binder doesn't actually use the
> > > vma.
> > 
> > So why does binder_update_page_range() take the mmap_read_lock then use
> > the cached vma in the reverted patch?
> > 
> > If you want to use it as a flag to see if the driver is initialized, why
> > not use the cached address != 0?
> > 
> > Or better yet,
> > 
> > > It only needs to know if the internal structure has been fully
> > > initialized and it is safe to use it.
> > 
> > This seems like a good reason to use your own rwsem.  This is,
> > essentially, rolling your own lock with
> > smp_store_release()/smp_load_acquire() and a pointer which should not be
> > cached.
> 
> We can't use an rwsem to protect the initialization. We already have an
> alloc->mutex which would be an option. However, using it under ->mmap()
> would only lead to deadlocks with the mmap_lock.
> 
> I agree with you that we could use some other flag instead of the vma
> pointer to signal the initialization. I've actually tried several times
> to come up with a scenario in which caching the vma pointer becomes an
> issue, which would justify stopping this altogether. However, I can't
> find anything concrete.
> 
> I don't think the current solution, in which we do all these unnecessary
> vma lookups, is correct. Instead, I'm currently working on a redesign of
> this section in which binder stops allocating/inserting pages manually.
> We should make use of the page-fault handler and let the infra handle
> all the work. The overall idea is here:
> https://lore.kernel.org/all/ZEGh4mliGHvyWIvo@xxxxxxxxxx/
> 
> It's hard to make the case for just dropping the vma pointer after ~15
> years and taking the performance hit without having an actual issue to
> support this idea. So I'll revert this for now and keep working on the
> page-fault solution.
> 

I came across this [1] when I was looking into something else and
thought I'd double back and make sure your fix for this UAF is also
included, since your revert will restore this bug.

I do still see the mmap_read_lock() in binder_update_page_range() vs the
required mmap_write_lock(), at least in my branch.

[1] https://lore.kernel.org/all/20221104175450.306810-1-cmllamas@xxxxxxxxxx/

Thanks,
Liam



