Linux expects that if a CPU modifies a memory location, then that
modification will eventually become visible to other CPUs in the system.
On the Loongson-3 processor with an SFB (Store Fill Buffer), loads may be
prioritised over stores, so it is possible for a store operation to be
postponed if a polling loop immediately follows it. If the variable being
polled indirectly depends on the outstanding store (for example, another
CPU may be polling the variable that is pending modification), then there
is the potential for deadlock if interrupts are disabled. This deadlock
occurs in the qspinlock code.

This patch changes the definition of cpu_relax() to smp_mb() for
Loongson-3, forcing a flush of the SFB on SMP systems before the next
load takes place. If the kernel is not compiled with SMP support, this
expands to a barrier() as before.

References: 534be1d5a2da940 (ARM: 6194/1: change definition of
cpu_relax() for ARM11MPCore)

Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Huacai Chen <chenhc@xxxxxxxxxx>
---
 arch/mips/include/asm/processor.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
index af34afb..a8c4a3a 100644
--- a/arch/mips/include/asm/processor.h
+++ b/arch/mips/include/asm/processor.h
@@ -386,7 +386,17 @@ unsigned long get_wchan(struct task_struct *p);
 #define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[29])
 #define KSTK_STATUS(tsk) (task_pt_regs(tsk)->cp0_status)
 
+#ifdef CONFIG_CPU_LOONGSON3
+/*
+ * Loongson-3's SFB (Store-Fill-Buffer) may get starved when stuck in a read
+ * loop. Since spin loops of any kind should have a cpu_relax() in them, force
+ * a Store-Fill-Buffer flush from cpu_relax() such that any pending writes will
+ * become available as expected.
+ */
+#define cpu_relax() smp_mb()
+#else
 #define cpu_relax() barrier()
+#endif
 
 /*
  * Return_address is a replacement for __builtin_return_address(count)
-- 
2.7.0