Patch "arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()" has been added to the 4.7-stable tree

This is a note to let you know that I've just added the patch titled

    arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()

to the 4.7-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-spinlocks-implement-smp_mb__before_spinlock-as-smp_mb.patch
and it can be found in the queue-4.7 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From 872c63fbf9e153146b07f0cece4da0d70b283eeb Mon Sep 17 00:00:00 2001
From: Will Deacon <will.deacon@xxxxxxx>
Date: Mon, 5 Sep 2016 11:56:05 +0100
Subject: arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()

From: Will Deacon <will.deacon@xxxxxxx>

commit 872c63fbf9e153146b07f0cece4da0d70b283eeb upstream.

smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation
to a full barrier, such that prior stores are ordered with respect to
loads and stores occurring inside the critical section.

Unfortunately, the core code defines the barrier as smp_wmb(), which
is insufficient to provide the required ordering guarantees when used in
conjunction with our load-acquire-based spinlock implementation.

This patch overrides the arm64 definition of smp_mb__before_spinlock()
to map to a full smp_mb().
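
To illustrate the ordering requirement described above, here is a minimal
hypothetical userspace analogue using C11 atomics (the variable names, the
toy lock and the function are illustrative only and are not taken from the
kernel source): an acquire-only lock operation does not order a prior store
against loads performed inside the critical section, whereas a full fence,
the analogue of smp_mb(), does.

#include <stdatomic.h>

atomic_int cond;        /* condition set by the "waker"          */
atomic_int lock_word;   /* toy spinlock word, 0 = unlocked       */
atomic_int task_state;  /* state examined inside the lock        */

void waker(void)
{
	/* Store that must be visible before the load of task_state below. */
	atomic_store_explicit(&cond, 1, memory_order_relaxed);

	/*
	 * Analogue of smp_mb__before_spinlock() implemented as smp_mb():
	 * without this full fence, the acquire-only lock below would not
	 * order the store to cond against the later load of task_state.
	 */
	atomic_thread_fence(memory_order_seq_cst);

	/* Acquire the toy lock (simplified compare-and-swap loop). */
	int expected = 0;
	while (!atomic_compare_exchange_weak_explicit(&lock_word, &expected, 1,
						      memory_order_acquire,
						      memory_order_relaxed))
		expected = 0;

	if (atomic_load_explicit(&task_state, memory_order_relaxed)) {
		/* ... would wake the sleeping task here ... */
	}

	/* Release the toy lock. */
	atomic_store_explicit(&lock_word, 0, memory_order_release);
}
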

Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Reported-by: Alan Stern <stern@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Will Deacon <will.deacon@xxxxxxx>
Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 arch/arm64/include/asm/spinlock.h |   10 ++++++++++
 1 file changed, 10 insertions(+)

--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -363,4 +363,14 @@ static inline int arch_read_trylock(arch
 #define arch_read_relax(lock)	cpu_relax()
 #define arch_write_relax(lock)	cpu_relax()
 
+/*
+ * Accesses appearing in program order before a spin_lock() operation
+ * can be reordered with accesses inside the critical section, by virtue
+ * of arch_spin_lock being constructed using acquire semantics.
+ *
+ * In cases where this is problematic (e.g. try_to_wake_up), an
+ * smp_mb__before_spinlock() can restore the required ordering.
+ */
+#define smp_mb__before_spinlock()	smp_mb()
+
 #endif /* __ASM_SPINLOCK_H */


Patches currently in stable-queue which might be from will.deacon@xxxxxxx are

queue-4.7/arm64-spinlocks-implement-smp_mb__before_spinlock-as-smp_mb.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


