On Mon, Apr 24, 2023 at 06:34:19PM -0400, Liam R. Howlett wrote:
> Cc linux-mm
>
> * Carlos Llamas <cmllamas@xxxxxxxxxx> [230424 16:56]:
> > This reverts commit a43cfc87caaf46710c8027a8c23b8a55f1078f19.
> >
> > This patch fixed an issue reported by syzkaller in [1]. However, this
> > turned out to be only a band-aid in binder. The root cause, as bisected
> > by syzkaller, was fixed by commit 5789151e48ac ("mm/mmap: undo ->mmap()
> > when mas_preallocate() fails"). We no longer need the patch for binder.
> >
> > Reverting such patch allows us to have a lockless access to alloc->vma
> > in specific cases where the mmap_lock is not required.
>
> Can you elaborate on the situation where recording a VMA pointer and
> later accessing it outside the mmap_lock is okay?

The specifics are in the third patch of this patchset, but the gist of it
is that during the ->mmap() handler, binder completes the initialization
of the binder_alloc structure, with the last step of this process being
the caching of the vma pointer. Since the ordering is protected with a
barrier, we can then check alloc->vma to determine whether the
initialization has been completed.

Since this check is part of the critical path for every single binder
transaction, performance plummeted when we started contending for the
mmap_lock. In this particular case, binder doesn't actually use the vma.
It only needs to know that the internal structure has been fully
initialized and is safe to use.

FWIW, this had been the design for ~15 years. The original patch is this:
https://git.kernel.org/torvalds/c/457b9a6f09f0
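
To make the ordering concrete, here is a minimal sketch of the
publish/observe pattern described above. It is illustrative only and not
the actual binder code: the helper names (binder_alloc_set_vma,
binder_alloc_get_vma) and the choice of smp_store_release()/
smp_load_acquire() as the barrier primitives are my assumptions, and the
binder_alloc / vm_area_struct types are taken from the kernel tree.

	/*
	 * Illustrative sketch (not the real binder code): the writer
	 * finishes initializing the structure and then publishes the
	 * vma pointer with a release store; lockless readers use an
	 * acquire load, so observing a non-NULL pointer guarantees the
	 * earlier initialization is visible.
	 */

	/* Called from the ->mmap() handler, mmap_lock held for write. */
	static void binder_alloc_set_vma(struct binder_alloc *alloc,
					 struct vm_area_struct *vma)
	{
		/* All other alloc fields are initialized before this. */
		smp_store_release(&alloc->vma, vma);
	}

	/* Lockless check used in the transaction fast path. */
	static struct vm_area_struct *
	binder_alloc_get_vma(struct binder_alloc *alloc)
	{
		/* Pairs with the release store; non-NULL means "ready". */
		return smp_load_acquire(&alloc->vma);
	}

With this pairing, a reader that sees a non-NULL alloc->vma is also
guaranteed to see every field written before the release store, which is
why the "has initialization completed?" check does not need the
mmap_lock.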