Re: [PATCH] x86/mpx: fix recursive munmap() corruption

On 20/04/2019 at 12:31, Michael Ellerman wrote:
Thomas Gleixner <tglx@xxxxxxxxxxxxx> writes:
On Mon, 1 Apr 2019, Dave Hansen wrote:
diff -puN mm/mmap.c~mpx-rss-pass-no-vma mm/mmap.c
--- a/mm/mmap.c~mpx-rss-pass-no-vma	2019-04-01 06:56:53.409411123 -0700
+++ b/mm/mmap.c	2019-04-01 06:56:53.423411123 -0700
@@ -2731,9 +2731,17 @@ int __do_munmap(struct mm_struct *mm, un
 		return -EINVAL;
 	len = PAGE_ALIGN(len);
+	end = start + len;
 	if (len == 0)
 		return -EINVAL;
+	/*
+	 * arch_unmap() might do unmaps itself.  It must be called
+	 * and finish any rbtree manipulation before this code
+	 * runs and also starts to manipulate the rbtree.
+	 */
+	arch_unmap(mm, start, end);

...
-static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
-			      unsigned long start, unsigned long end)
+static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
+			      unsigned long end)

While you fixed up the asm-generic thing, this breaks arch/um and
arch/unicore32. For those the fixup is trivial: just remove the vma
argument, as sketched below.
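
For reference, the fixup is presumably just this. It is only a sketch
from memory of arch/um's empty arch_unmap() stub in
arch/um/include/asm/mmu_context.h, untested, and unicore32 carries an
analogous copy; the exact context lines may differ:

--- a/arch/um/include/asm/mmu_context.h
+++ b/arch/um/include/asm/mmu_context.h
-static inline void arch_unmap(struct mm_struct *mm,
-			struct vm_area_struct *vma,
-			unsigned long start, unsigned long end)
+static inline void arch_unmap(struct mm_struct *mm,
+			unsigned long start, unsigned long end)
 {
 }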

But it also breaks powerpc, and there I'm not sure whether moving
arch_unmap() to the beginning of __do_munmap() is safe. Michael?

I don't know for sure but I think it should be fine. That code is just
there to handle CRIU unmapping/remapping the VDSO. So that either needs
to happen while the process is stopped or it needs to handle races
anyway, so I don't see how the placement within the unmap path should
matter.

My only concern is the error path.

Calling arch_unmap() before handling any error case means that its work
will have to be undone if a later check fails, and there is no way to
do so.

I don't know what the rationale is for moving arch_unmap() to the
beginning of __do_munmap(), but the error paths must be managed (the
sketch below illustrates the problem).
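
To make the ordering concrete, here is a condensed sketch of
__do_munmap() with the patch applied. This is simplified from the
5.1-era mm/mmap.c, not the verbatim upstream code: a failure in
__split_vma() is reported to user space only after arch_unmap() has
already torn down the arch state (on powerpc, vdso_base would already
be zeroed).

int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
		struct list_head *uf, bool downgrade)
{
	unsigned long end;
	struct vm_area_struct *vma;

	if (offset_in_page(start) || start > TASK_SIZE ||
	    len > TASK_SIZE - start)
		return -EINVAL;

	len = PAGE_ALIGN(len);
	end = start + len;
	if (len == 0)
		return -EINVAL;

	/* arch state (e.g. powerpc's vdso_base) is torn down here ... */
	arch_unmap(mm, start, end);

	vma = find_vma(mm, start);
	if (!vma)
		return 0;

	if (start > vma->vm_start) {
		int error = __split_vma(mm, vma, start, 0);

		/*
		 * ... but if the split fails, munmap() returns an error
		 * while the arch_unmap() side effects stay in place.
		 */
		if (error)
			return error;
	}

	/* ... rest of the unmap elided ... */
	return 0;
}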

Aside of that the powerpc variant looks suspicious:

static inline void arch_unmap(struct mm_struct *mm,
			      unsigned long start, unsigned long end)
{
	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
		mm->context.vdso_base = 0;
}

Shouldn't that be:

  	if (start >= mm->context.vdso_base && mm->context.vdso_base < end)

Hmm?

Yeah, looks pretty suspicious. I'll follow up with Laurent, who wrote it.
Thanks for spotting it!

I have to admit that I had to read that code carefully before answering.

There are 2 assumptions here:
1. 'start' and 'end' are page aligned (this is guaranteed by __do_munmap());
2. the VDSO is 1 page (this is guaranteed by the union vdso_data_store on powerpc).

The idea is to handle a munmap() call surrounding the VDSO area:
      | VDSO |
 ^start         ^end

This is covered by the test as written, and a munmap() matching the exact boundaries of the VDSO is handled too (see the little check below).
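
To make both cases concrete, here is a small userspace check of the
condition. 0x7000 is an arbitrary page-aligned example address, not the
real vdso_base, and PAGE_SIZE is fixed at 4k for the illustration:

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE	0x1000UL
#define VDSO_BASE	0x7000UL	/* hypothetical, page aligned */

/* the test as written in powerpc's arch_unmap() */
static int vdso_hit(unsigned long start, unsigned long end)
{
	return start <= VDSO_BASE && VDSO_BASE < end;
}

int main(void)
{
	/* exact boundaries: [0x7000, 0x8000) removes the one VDSO page */
	assert(vdso_hit(VDSO_BASE, VDSO_BASE + PAGE_SIZE));

	/* surrounding unmap: [0x6000, 0x9000) covers the VDSO too */
	assert(vdso_hit(VDSO_BASE - PAGE_SIZE, VDSO_BASE + 2 * PAGE_SIZE));

	/*
	 * Note: with the suggested 'start >= vdso_base' the surrounding
	 * case above would be missed, since 0x6000 >= 0x7000 is false.
	 */

	/* ranges entirely below or above leave the VDSO page alone */
	assert(!vdso_hit(VDSO_BASE - PAGE_SIZE, VDSO_BASE));
	assert(!vdso_hit(VDSO_BASE + PAGE_SIZE, VDSO_BASE + 2 * PAGE_SIZE));

	printf("all cases behave as expected\n");
	return 0;
}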

Am I missing something?

Cheers,
Laurent.



