Re: [heads-up] weirdness in arch/metag/lib/usercopy.c

Hi Al,

On Wed, Mar 29, 2017 at 12:55:02AM +0100, Al Viro wrote:
> 	AFAICS, given the definitions in there, you have
> 	__asm_copy_from_user_4(to, from, ret)
> expand to
> 	__asm_copy_from_user_4x_cont(to, from, ret, "", "", "")
> which expands to
> 	__asm_copy_user_cont(to, from, ret,
>                 "       GETD D1Ar1,[%1++]\n"
>                 "2:     SETD [%0++],D1Ar1\n" "",
>                 "3:     ADD  %2,%2,#4\n"
>                 "       SETD [%0++],D1Ar1\n" "",
>                 "       .long 2b,3b\n" "")
> and
>         asm volatile (
>                 "       GETD D1Ar1,[%1++]\n"
>                 "2:     SETD [%0++],D1Ar1\n" ""
>                 "1:\n"
>                 "       .section .fixup,\"ax\"\n"
>                 "       MOV D1Ar1,#0\n"
>                 "3:     ADD  %2,%2,#4\n"
>                 "       SETD [%0++],D1Ar1\n" ""
>                 "       MOVT    D1Ar1,#HI(1b)\n"
>                 "       JUMP    D1Ar1,#LO(1b)\n"
>                 "       .previous\n"
>                 "       .section __ex_table,\"a\"\n"
>                 "       .long 2b,3b\n" ""
>                 "       .previous\n"
>                 : "=r" (to), "=r" (from), "=r" (ret)
>                 : "0" (to), "1" (from), "2" (ret)
>                 : "D1Ar1", "memory")
> 
> Could you explain how it ever manages to reach that
> 		MOV D1Ar1, #0
> in the beginning of fixup?  And I don't believe that it's something odd
> about exception fixup on metag, because
> 	__asm_copy_from_user_8(to, from, ret)
> expands to
> 	__asm_copy_from_user_8x_cont(to, from, ret, "", "", "")
> then to
>         __asm_copy_from_user_4x_cont(to, from, ret,
>                 "       GETD D1Ar1,[%1++]\n"
>                 "4:     SETD [%0++],D1Ar1\n" "",
>                 "5:     ADD  %2,%2,#4\n"
>                 "       SETD [%0++],D1Ar1\n" "",
>                 "       .long 4b,5b\n" "")
> then to
>         __asm_copy_user_cont(to, from, ret,
>                 "       GETD D1Ar1,[%1++]\n"
>                 "2:     SETD [%0++],D1Ar1\n"
>                 "       GETD D1Ar1,[%1++]\n"
>                 "4:     SETD [%0++],D1Ar1\n" "",
>                 "3:     ADD  %2,%2,#4\n"
>                 "       SETD [%0++],D1Ar1\n"
>                 "5:     ADD  %2,%2,#4\n"
>                 "       SETD [%0++],D1Ar1\n" "",
>                 "       .long 2b,3b\n"
>                 "       .long 4b,5b\n" "")
> and finally
>         asm volatile (
>                 "       GETD D1Ar1,[%1++]\n"
>                 "2:     SETD [%0++],D1Ar1\n"
>                 "       GETD D1Ar1,[%1++]\n"
>                 "4:     SETD [%0++],D1Ar1\n" ""
>                 "1:\n" 
>                 "       .section .fixup,\"ax\"\n"
>                 "       MOV D1Ar1,#0\n"
>                 "3:     ADD  %2,%2,#4\n"
>                 "       SETD [%0++],D1Ar1\n"
>                 "5:     ADD  %2,%2,#4\n"
>                 "       SETD [%0++],D1Ar1\n" ""
>                 "       MOVT    D1Ar1,#HI(1b)\n"
>                 "       JUMP    D1Ar1,#LO(1b)\n"
>                 "       .previous\n"
>                 "       .section __ex_table,\"a\"\n"
>                 "       .long 2b,3b\n"
>                 "       .long 4b,5b\n" ""
>                 "       .previous\n"
>                 : "=r" (to), "=r" (from), "=r" (ret)
>                 : "0" (to), "1" (from), "2" (ret)
>                 : "D1Ar1", "memory")
> 
> I could believe in fixup for fault on the first word hitting one insn
> before 3:, but fault on the _second_ one sure as hell should not reach
> back that far, or they would've ended up identical and both storing 8 bytes
> of zeroes, overflowing the destination in case of the second fault.
> Moreover, it can't be relying upon the compiler leaving D1Ar1 zeroed on
> the entry somehow - failure on the second word would have it equal to
> the value of the first word, even if it had been zeroed to start with.

Thanks for the heads-up. Yeah, that is rather suspect; I'll look into it.

On <8 byte aligned copies from user, the first faulting byte/word/doubleword
in the destination will appear to contain either the low bits of the kernel
destination address (if it is the first copy) or a stale duplicate of the
previously copied value.

The MOV of 0 into D1Ar1 before the fixup label dates back a long way too,
surviving various refactors since the initial addition of proper fixup
code on July 12th 2006 (based on v2.4.32).

> 
> How can that possibly work?  And could somebody post a patch removing
> zeroing altogether, both from those macros and from __copy_user_zeroing()?
> copy_from_user() should be doing that memset() (it's on the slow path
> anyway).

Agreed.

> I would've done that myself, but faults on metag are rather
> odd and I've no way to test the damn thing, so I'd rather leave it to
> metag maintainers.  FWIW, that has come up when doing uaccess unification;
> right now metag is one of two architectures still not converted to
> raw_copy_{to,from}_user() (the other one is itanic).  See vfs.git#work.uaccess
> for details...
> 
> Another question is with your __copy_to_user() - unless I'm misreading that
> code, it does *not* stop on store fault.  That in itself wouldn't have been
> a bug, but... what happens if you ask it to copy 3 pages worth of data into
> a 3-page userland buffer, with the middle page unmapped?

Indeed, that needs fixing too.

Thanks
James


