4.3-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Lorenzo Pieralisi <lorenzo.pieralisi@xxxxxxx>

commit 57a65667991aaddef730b0c910111ab76a1ff245 upstream.

The current arm64 __cmpxchg_double{_mb} implementations carry out the
compare exchange by first comparing the old values passed in to the
values read from the pointer provided and stashing the cumulative
bitwise difference in a 64-bit register.

By comparing the register content against 0, it is possible to detect
whether the values read differ from the old values passed in, so that
the compare exchange knows whether it has to bail out or carry on and
complete the operation with the exchange.

Given the current implementation, to detect the cmpxchg operation
status, the __cmpxchg_double{_mb} functions should return the 64-bit
stashed bitwise difference so that the caller can detect cmpxchg
failure by comparing the return value against 0.

The current implementation declares the return value as an int, so the
64-bit value stashing the bitwise difference is truncated before being
returned to the __cmpxchg_double{_mb} callers; any bitwise difference
present in the top 32 bits therefore goes undetected, triggering false
positives and subsequent kernel failures.

This patch fixes the issue by declaring the arm64 __cmpxchg_double{_mb}
return values as a long, so that the bitwise difference is properly
propagated on failure, restoring the expected behaviour.

Fixes: e9a4b795652f ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU")
Cc: Marc Zyngier <marc.zyngier@xxxxxxx>
Acked-by: Will Deacon <will.deacon@xxxxxxx>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@xxxxxxx>
Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 arch/arm64/include/asm/atomic_ll_sc.h |    2 +-
 arch/arm64/include/asm/atomic_lse.h   |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

--- a/arch/arm64/include/asm/atomic_ll_sc.h
+++ b/arch/arm64/include/asm/atomic_ll_sc.h
@@ -211,7 +211,7 @@ __CMPXCHG_CASE( ,  ,  mb_8, dmb ish,  l, "
 #undef __CMPXCHG_CASE
 
 #define __CMPXCHG_DBL(name, mb, rel, cl)				\
-__LL_SC_INLINE int							\
+__LL_SC_INLINE long							\
 __LL_SC_PREFIX(__cmpxchg_double##name(unsigned long old1,		\
 				      unsigned long old2,		\
 				      unsigned long new1,		\
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -348,7 +348,7 @@ __CMPXCHG_CASE(x, ,  mb_8, al, "memory")
 #define __LL_SC_CMPXCHG_DBL(op)	__LL_SC_CALL(__cmpxchg_double##op)
 
 #define __CMPXCHG_DBL(name, mb, cl...)					\
-static inline int __cmpxchg_double##name(unsigned long old1,		\
+static inline long __cmpxchg_double##name(unsigned long old1,		\
 					 unsigned long old2,		\
 					 unsigned long new1,		\
 					 unsigned long new2,		\
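For illustration, a minimal user-space C sketch of the truncation bug
described above (not part of the patch; it assumes a 64-bit long, as on
arm64, and the helper names are hypothetical, not from the kernel
sources):

/*
 * Illustration only. The cmpxchg_double status is the cumulative XOR
 * of the old values and the values observed in memory; 0 means
 * "match, carry on with the exchange".
 */
#include <stdio.h>
#include <stdint.h>

static int status_as_int(uint64_t old, uint64_t mem)
{
	/* Old behaviour: the top 32 bits of the XOR are discarded. */
	return (int)(old ^ mem);
}

static long status_as_long(uint64_t old, uint64_t mem)
{
	/* Fixed behaviour: the full 64-bit difference survives,
	 * since long is 64-bit on arm64. */
	return (long)(old ^ mem);
}

int main(void)
{
	uint64_t old = 0x0000000012345678ULL;
	uint64_t mem = 0xdead000012345678ULL; /* differs only in bits 63:32 */

	printf("int  status == 0? %s (false positive)\n",
	       status_as_int(old, mem) == 0 ? "yes" : "no");
	printf("long status == 0? %s (mismatch detected)\n",
	       status_as_long(old, mem) == 0 ? "yes" : "no");
	return 0;
}

With these values the int-returning variant reports 0 (a spurious
match), while the long-returning variant correctly reports a nonzero
difference, which is exactly the behaviour the one-line type change
restores.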