Re: [PATCH] x86/mpx: fix recursive munmap() corruption

Hi Laurent,

On 07/05/2019 at 18:35, Laurent Dufour wrote:
On 01/05/2019 at 12:32, Michael Ellerman wrote:
Laurent Dufour <ldufour@xxxxxxxxxxxxxxxxxx> writes:
On 23/04/2019 at 18:04, Dave Hansen wrote:
On 4/23/19 4:16 AM, Laurent Dufour wrote:
...
There are 2 assumptions here:
   1. 'start' and 'end' are page aligned (this is guaranteed by __do_munmap()).
   2. the VDSO is 1 page (this is guaranteed by the union vdso_data_store on powerpc)

Are you sure about #2?  The 'vdso64_pages' variable seems rather
unnecessary if the VDSO is only 1 page. ;)

Hum, not so sure now ;)
I got confused; only the header is one page.
The test works on a best-effort basis and doesn't cover the case where
only a few pages inside the VDSO are unmapped (start >
mm->context.vdso_base). This is not what CRIU is doing, so it was
enough for CRIU support.

Michael, do you think there is a need to handle all the possibilities
here, since the only user is CRIU and unmapping the VDSO is not such a
good idea for other processes?

Couldn't we implement the semantics that if any part of the VDSO is
unmapped then vdso_base is set to zero? That should be fairly easy, e.g.:

    if (start < vdso_end && end >= mm->context.vdso_base)
        mm->context.vdso_base = 0;


We might need to add vdso_end to the mm->context, but that should be OK.

That seems like it would work for CRIU and make sense in general?

Sorry for the late answer; yes, this would make more sense.

Here is a patch doing that.


In your patch, the test seems overkill:

+	if ((start <= vdso_base && vdso_end <= end) ||  /* 1   */
+	    (vdso_base <= start && start < vdso_end) || /* 3,4 */
+	    (vdso_base < end && end <= vdso_end))       /* 2,3 */
+		mm->context.vdso_base = mm->context.vdso_end = 0;

What about

	if (start < vdso_end && vdso_base < end)
		mm->context.vdso_base = mm->context.vdso_end = 0;

This should cover all cases, or am I missing something?
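
For illustration, here is a minimal sketch of how that check could sit in the powerpc arch_unmap() hook (assuming an arch_unmap(mm, start, end) style prototype and that vdso_base and vdso_end are both kept page aligned in mm->context; untested):

	static inline void arch_unmap(struct mm_struct *mm,
				      unsigned long start, unsigned long end)
	{
		unsigned long vdso_base = mm->context.vdso_base;
		unsigned long vdso_end = mm->context.vdso_end;

		/*
		 * Sketch only: forget the VDSO location as soon as any part
		 * of [vdso_base, vdso_end) is unmapped.  When no VDSO is
		 * mapped both fields are 0, so the test is always false.
		 */
		if (start < vdso_end && vdso_base < end)
			mm->context.vdso_base = mm->context.vdso_end = 0;
	}

The condition is the usual half-open range overlap test, so the three explicit cases of the original patch collapse into a single comparison.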


And do we really need to store vdso_end in the context?
I think it should be possible to recalculate it: the size of the VDSO should be (&vdso32_end - &vdso32_start) + PAGE_SIZE for the 32-bit VDSO, and (&vdso64_end - &vdso64_start) + PAGE_SIZE for the 64-bit VDSO.
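
For example, something along these lines (a sketch only, assuming the vdso32_start/vdso32_end and vdso64_start/vdso64_end symbols as declared in arch/powerpc/kernel/vdso.c plus one extra data page, as described above; untested):

	/* Symbols delimiting the VDSO images (assumed, as in arch/powerpc/kernel/vdso.c). */
	extern char vdso32_start, vdso32_end;
	extern char vdso64_start, vdso64_end;

	/* Sketch: derive the size of the mapped VDSO instead of storing vdso_end. */
	static unsigned long vdso_size(void)
	{
	#ifdef CONFIG_PPC64
		if (!is_32bit_task())
			return (&vdso64_end - &vdso64_start) + PAGE_SIZE;
	#endif
		return (&vdso32_end - &vdso32_start) + PAGE_SIZE;
	}

The overlap test above would then use mm->context.vdso_base + vdso_size() in place of a stored mm->context.vdso_end.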

Christophe




