AFAICS, given the definitions in there, you have
	__asm_copy_from_user_4(to, from, ret)
expand to
	__asm_copy_from_user_4x_cont(to, from, ret, "", "", "")
which expands to
	__asm_copy_user_cont(to, from, ret,
		" GETD D1Ar1,[%1++]\n"
		"2: SETD [%0++],D1Ar1\n" "",
		"3: ADD %2,%2,#4\n"
		" SETD [%0++],D1Ar1\n" "",
		" .long 2b,3b\n" "")
and
	asm volatile (
		" GETD D1Ar1,[%1++]\n"
		"2: SETD [%0++],D1Ar1\n"
		""
		"1:\n"
		" .section .fixup,\"ax\"\n"
		" MOV D1Ar1,#0\n"
		"3: ADD %2,%2,#4\n"
		" SETD [%0++],D1Ar1\n"
		""
		" MOVT D1Ar1,#HI(1b)\n"
		" JUMP D1Ar1,#LO(1b)\n"
		" .previous\n"
		" .section __ex_table,\"a\"\n"
		" .long 2b,3b\n"
		""
		" .previous\n"
		: "=r" (to), "=r" (from), "=r" (ret)
		: "0" (to), "1" (from), "2" (ret)
		: "D1Ar1", "memory")

Could you explain how it ever manages to reach that MOV D1Ar1,#0 at the
beginning of the fixup?  And I don't believe it's something odd about
exception fixup on metag, because
	__asm_copy_from_user_8(to, from, ret)
expands to
	__asm_copy_from_user_8x_cont(to, from, ret, "", "", "")
then to
	__asm_copy_from_user_4x_cont(to, from, ret,
		" GETD D1Ar1,[%1++]\n"
		"4: SETD [%0++],D1Ar1\n" "",
		"5: ADD %2,%2,#4\n"
		" SETD [%0++],D1Ar1\n" "",
		" .long 4b,5b\n" "")
then to
	__asm_copy_user_cont(to, from, ret,
		" GETD D1Ar1,[%1++]\n"
		"2: SETD [%0++],D1Ar1\n"
		" GETD D1Ar1,[%1++]\n"
		"4: SETD [%0++],D1Ar1\n" "",
		"3: ADD %2,%2,#4\n"
		" SETD [%0++],D1Ar1\n"
		"5: ADD %2,%2,#4\n"
		" SETD [%0++],D1Ar1\n" "",
		" .long 2b,3b\n"
		" .long 4b,5b\n" "")
and finally
	asm volatile (
		" GETD D1Ar1,[%1++]\n"
		"2: SETD [%0++],D1Ar1\n"
		" GETD D1Ar1,[%1++]\n"
		"4: SETD [%0++],D1Ar1\n"
		""
		"1:\n"
		" .section .fixup,\"ax\"\n"
		" MOV D1Ar1,#0\n"
		"3: ADD %2,%2,#4\n"
		" SETD [%0++],D1Ar1\n"
		"5: ADD %2,%2,#4\n"
		" SETD [%0++],D1Ar1\n"
		""
		" MOVT D1Ar1,#HI(1b)\n"
		" JUMP D1Ar1,#LO(1b)\n"
		" .previous\n"
		" .section __ex_table,\"a\"\n"
		" .long 2b,3b\n"
		" .long 4b,5b\n"
		""
		" .previous\n"
		: "=r" (to), "=r" (from), "=r" (ret)
		: "0" (to), "1" (from), "2" (ret)
		: "D1Ar1", "memory")

I could believe in the fixup for a fault on the first word landing one insn
before 3:, but a fault on the _second_ one sure as hell should not reach back
that far - otherwise the two fixups would be identical, both storing 8 bytes
of zeroes and overflowing the destination in the case of the second fault.
Moreover, it can't be relying on the compiler somehow leaving D1Ar1 zeroed on
entry - a failure on the second word would have it holding the value of the
first word, even if it had been zeroed to start with.  How can that possibly
work?

And could somebody post a patch removing the zeroing altogether, both from
those macros and from __copy_user_zeroing()?  copy_from_user() should be
doing that memset() (it's on the slow path anyway).  I would've done it
myself, but faults on metag are rather odd and I've no way to test the damn
thing, so I'd rather leave it to the metag maintainers.

FWIW, this has come up while doing the uaccess unification; right now metag
is one of the two architectures still not converted to
raw_copy_{to,from}_user() (the other one is itanic).  See
vfs.git#work.uaccess for details...

Another question is about your __copy_to_user() - unless I'm misreading that
code, it does *not* stop on a store fault.  That in itself wouldn't have been
a bug, but... what happens if you ask it to copy 3 pages' worth of data into
a 3-page userland buffer, with the middle page unmapped?
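
To spell out what goes wrong in that case - a hypothetical caller (the
function name, ubuf and kbuf are made up purely for illustration):

	/* 'ubuf' spans three pages of userspace, the middle one unmapped */
	static ssize_t example_copy(void __user *ubuf, const void *kbuf)
	{
		unsigned long left = __copy_to_user(ubuf, kbuf, 3 * PAGE_SIZE);

		/*
		 * 3 * PAGE_SIZE - left gets reported as the amount actually
		 * delivered (think of a short write(2)), which only makes
		 * sense if everything up to the first uncopied byte has been
		 * stored and nothing past it touched.  A copy that barges on
		 * past the store fault and also fills the third page leaves
		 * no value of 'left' that correctly describes the result.
		 */
		return 3 * PAGE_SIZE - left;
	}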
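
FWIW, once the zeroing moves out of the arch helpers, copy_from_user() itself
zeroes whatever the arch side failed to copy.  Roughly this shape (a sketch
only - the authoritative version is whatever sits in vfs.git#work.uaccess,
with raw_copy_from_user() standing for whatever the architecture provides):

	static inline unsigned long
	copy_from_user(void *to, const void __user *from, unsigned long n)
	{
		unsigned long res = n;

		if (likely(access_ok(VERIFY_READ, from, n)))
			res = raw_copy_from_user(to, from, n); /* bytes NOT copied */
		if (unlikely(res))
			memset(to + (n - res), 0, res);	/* zero the uncopied tail */
		return res;
	}

With that, the arch-level primitive only has to report how much it failed to
copy and never needs to play zeroing games in the fixup at all.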