Re: BUG in binder_vma_close() at mmap_assert_locked() in stable v5.15

On Fri, Sep 23, 2022 at 1:50 PM Carlos Llamas <cmllamas@xxxxxxxxxx> wrote:
>
> On Tue, Sep 13, 2022 at 06:36:33AM +0000, Liam Howlett wrote:
> >
> > It sounds like the binder_alloc vma_vm_mm is being used unsafely as
> > well?  I'd actually go the other way with this and try to add more
> > validations that are optimized out on production builds.  Since binder
> > is saving a pointer to the mm struct and was saving the vma pointer, we
> > should be very careful about how we use them.  Is the mutex in
> > binder_alloc enough protection for the vma that the binder buffers use?
> > How is ->close() not being called before the exit_mmap() path?
>
> The alloc->mutex is top-level so it can't be used under vm_ops or we
> risk a possible deadlock with mmap_lock unfortunately.
>
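
If I follow, the ordering to avoid is roughly the sketch below (simplified,
and the second path is the hypothetical one where a vm_ops callback would
take alloc->mutex, which is exactly what you are ruling out):

/* Path A: buffer allocation, roughly what binder does today */
mutex_lock(&alloc->mutex);
mmap_read_lock(mm);			/* alloc->mutex, then mmap_lock */
...
mmap_read_unlock(mm);
mutex_unlock(&alloc->mutex);

/* Path B: hypothetical vm_ops callback that takes alloc->mutex */
/* munmap()/exit_mmap() already holds mmap_lock when it calls this */
binder_vma_close(vma);
	mutex_lock(&alloc->mutex);	/* mmap_lock, then alloc->mutex: ABBA */
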
> >
> > When you look at the mmget_not_zero() stuff, have a look at
> > binder_alloc_new_buf_locked().  I think it is unsafely using the
> > vma_vm_mm pointer without calling mmget_not_zero(), but the calling
> > function is rather large so I'm not sure.
>
> We had been using the mm safely in places like binder_update_page_range(),
> but not in the recent changes that switch over to vma_lookup(). It seems
> this can be an issue if ->release() races with
> binder_alloc_print_allocated(), so I'll take a closer look at this.
>
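
For reference, the pattern I think of as the safe one, paraphrasing
binder_update_page_range() from memory (so take the exact code with a
grain of salt):

struct mm_struct *mm = NULL;

if (need_mm && mmget_not_zero(alloc->vma_vm_mm))
	mm = alloc->vma_vm_mm;	/* the mm can't go away under us now */

if (mm) {
	mmap_read_lock(mm);
	/* use the vma / page tables only while mmap_lock is held */
	mmap_read_unlock(mm);
	mmput(mm);
}
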
>
> So a fix for the initial BUG concern has landed in v5.15.70:
> https://git.kernel.org/stable/c/899f4160b140
>
> However, after doing a deeper investigation it seems there is still an
> underlying problem. This requires a bit of context so please bear with
> me while I try to explain.
>
> It started with the maple tree patchset in linux-next, which adds a
> late allocation to mmap_region() in commit ("mm: start tracking VMAs
> with maple tree"). Syzbot made this allocation fail, so mmap_region()
> unwinds, munmaps, and frees the vma. This error path leaves the cached
> alloc->vma in binder as a dangling pointer.
>
> Liam explains the scenario here:
> https://lore.kernel.org/all/20220621020814.sjszxp3uz43rka62@revolver/
>
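
To make the ordering explicit, the relevant v5.15 flow is roughly the
following (heavily trimmed, from memory):

mmap_region()
	...
	error = call_mmap(file, vma);	/* binder_mmap() caches the vma here */
	if (error)
		goto unmap_and_free_vma;
	...
	/*
	 * A later check or allocation fails, e.g. arch_validate_flags()
	 * or the maple tree preallocation added in -next:
	 */
	goto unmap_and_free_vma;
	...
unmap_and_free_vma:
	fput(vma->vm_file);
	vma->vm_file = NULL;
	/* undo any partial mapping done by a device driver */
	unmap_region(mm, vma, prev, vma->vm_start, vma->vm_end);
free_vma:
	vm_area_free(vma);	/* binder's cached vma pointer now dangles */
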
> Also, Liam correctly points out that it is safer to look up the vma
> instead of caching a pointer to it. That change is what was eventually
> proposed as the fix for the fuzzer report.
>
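
Right, i.e. instead of dereferencing a cached alloc->vma, the vma is
looked up under mmap_lock every time it is needed. A rough sketch of the
idea (the vma_addr field name is just for illustration):

struct vm_area_struct *vma;

mmap_read_lock(alloc->vma_vm_mm);
vma = vma_lookup(alloc->vma_vm_mm, alloc->vma_addr);
if (vma) {
	/* the vma is only valid while mmap_lock is held */
}
mmap_read_unlock(alloc->vma_vm_mm);
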
> However, I wonder why ->mmap() isn't being undone for this exit path in
> mmap_region(). If anything fails _after_ call_mmap(), it seems we
> silently unmap and free the vma. What about allocations, atomic_incs,
> and anything else done inside ->mmap()?

I think that if ->mmap() fails, it is expected to undo its own changes
before returning the error. mmap_region() has no idea what kind of
changes ->mmap() made before it failed, so it can't undo them. At least
that's how I understand it.
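
A driver's ->mmap() that allocates or takes references is then expected
to look roughly like the generic sketch below (the foo_* names are all
made up for illustration): whatever was set up before hitting its own
error has to be torn down by ->mmap() itself.

static int foo_mmap(struct file *filp, struct vm_area_struct *vma)
{
	struct foo_dev *foo = filp->private_data;
	int ret;

	ret = foo_setup_mapping(foo, vma);	/* allocations, refcounts, ... */
	if (ret)
		return ret;			/* nothing for the core to undo */

	ret = foo_finish_mapping(foo, vma);
	if (ret) {
		foo_teardown_mapping(foo, vma);	/* undo our own work ... */
		return ret;			/* ... before reporting failure */
	}

	vma->vm_ops = &foo_vm_ops;
	return 0;
}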

> Shouldn't a ->close() call be needed here
> to undo these things, like so:
> --
> @@ -1872,6 +1889,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>
>         return addr;
>
> +undo_mmap:
> +       if (vma->vm_ops && vma->vm_ops->close)
> +               vma->vm_ops->close(vma);
>  unmap_and_free_vma:
>         fput(vma->vm_file);
>         vma->vm_file = NULL;
> --
>

I don't see mmap_region() calling vma->vm_ops->open() anywhere. So why
would it have to call vma->vm_ops->close()?

> I managed to reproduce the same syzbot issue in v5.15.41 by failing the
> arch_validate_flags() check, simply by passing the PROT_MTE flag to
> mmap(). I ran this on a "qemu-system-aarch64 -M virt,mte=on" system.
>
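
(For anyone who wants to try this, I assume the reproducer boils down to
an mmap() of the binder fd with PROT_MTE, i.e. something like the
untested sketch below.)

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20	/* arm64 */
#endif

int main(void)
{
	int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
	void *p;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * binder_mmap() runs and caches the vma, then
	 * arch_validate_flags() rejects VM_MTE on this file mapping and
	 * mmap_region() unwinds behind binder's back.
	 */
	p = mmap(NULL, 4 * 1024 * 1024, PROT_READ | PROT_MTE,
		 MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		perror("mmap");

	close(fd);
	return 0;
}
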
> Am I missing something? All the memory leaks, corrupt refcounts, etc.
> that could follow from this simple path look scary to me.
>
> --
> Carlos Llamas



