On Mon, 5 Oct 2015, Jaccon Bastiaansen wrote:
> We did some tests with different compilers, kernel versions and kernel
> configs, with the following results:
>
> Linux 3.12.48, x86_64_defconfig, GCC 4.6.1:
>     copy_user_generic_unrolled being used, so race condition possible
> Linux 3.12.48, x86_64_defconfig, GCC 4.9.1:
>     copy_user_generic_unrolled being used, so race condition possible
> Linux 4.2.3, x86_64_defconfig, GCC 4.6.1:
>     32-bit read being used, no race condition
> Linux 4.2.3, x86_64_defconfig, GCC 4.9.1:
>     32-bit read being used, no race condition
>
> Our idea to fix this problem is to use an explicit 32-bit read in
> get_futex_value_locked() instead of using the generic function
> copy_from_user_inatomic() and hoping that the compiler uses an atomic
> access and the right access size.

You cannot use an explicit 32-bit read. We need an access which handles
the fault gracefully. In current mainline this is done properly:

    ret = __copy_from_user_inatomic(dst, src, size = sizeof(u32))
      __copy_from_user_nocheck(dst, src, size)

	if (!__builtin_constant_p(size))
		return copy_user_generic(dst, (__force void *)src, size);

size is constant, so we end up in the switch case:

	switch (size) {
	case 4:
		__get_user_asm(*(u32 *)dst, (u32 __user *)src,
			       ret, "l", "k", "=r", 4);
		return ret;
	....

In 3.12 this is different:

    __copy_from_user_inatomic()
      copy_user_generic()
	copy_user_generic_unrolled()

So this is only an issue for kernel versions < 3.13. It was fixed with

    ff47ab4ff3cd "Add 1/2/4/8 byte optimization to 64bit
		  __copy_{from,to}_user_inatomic"

but nobody noticed that the race you described can happen, so it was
never backported to the stable kernels.

@stable: Can you please pick up ff47ab4ff3cd plus

    df90ca969035d "x86, sparse: Do not force removal of __user when
		   calling copy_to/from_user_nocheck()"

for stable kernels <= 3.12?

If that's too much churn, then I can come up with an explicit fix for
this. Let me know.

Thanks,

	tglx
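
For reference, get_futex_value_locked() in the affected kernels looks
roughly like the sketch below (paraphrased from kernel/futex.c of that
era, not a verbatim quote). The read runs with page faults disabled and
must report a fault via -EFAULT instead of oopsing, which is why a plain
32-bit load cannot simply be substituted:

	static int get_futex_value_locked(u32 *dest, u32 __user *from)
	{
		int ret;

		/* No faulting allowed; callers hold the hash bucket lock. */
		pagefault_disable();
		ret = __copy_from_user_inatomic(dest, from, sizeof(u32));
		pagefault_enable();

		/* A non-zero return means the copy faulted. */
		return ret ? -EFAULT : 0;
	}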
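
And a minimal sketch of what such an explicit fix could look like,
assuming __get_user() is acceptable in this context: for a u32 pointer
it emits a single, correctly sized access and still handles the fault
gracefully, avoiding the byte-wise copy in copy_user_generic_unrolled().
This is an illustration, not the patch that was actually merged:

	static int get_futex_value_locked(u32 *dest, u32 __user *from)
	{
		int ret;

		pagefault_disable();
		/*
		 * __get_user() compiles to one 32-bit load for a u32
		 * pointer, so a concurrent writer cannot cause a torn
		 * read; on a fault, ret becomes -EFAULT.
		 */
		ret = __get_user(*dest, from);
		pagefault_enable();

		return ret;
	}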