Re: [PATCH V2 12/12] MIPS: Loongson: Introduce and use WAR_LLSC_MB

On Sat, Jan 27, 2018 at 11:23:01AM +0800, Huacai Chen wrote:
> On the Loongson-2G/2H/3A/3B there is a hardware flaw: ll/sc and
> lld/scd have very weak ordering. We should add sync instructions before
> each ll/lld and after the last sc/scd as a workaround. Otherwise, this
> flaw will occasionally cause deadlocks (e.g. when doing heavy load
> testing with LTP).

How confident are you that this is the minimal change required to fix
the issue? Is the problem well understood?

It'd be helpful to have some more details about the flaw if you have
them.

I.e. does it really have to be done on every loop iteration (such that
using WEAK_REORDERING_BEYOND_LLSC is insufficient)?
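
For reference, the existing weak-LLSC handling (a rough sketch from
memory, not from this patch) only emits a single sync after the sc via
smp_llsc_mb(), keyed off CONFIG_WEAK_REORDERING_BEYOND_LLSC:

  #ifdef CONFIG_WEAK_REORDERING_BEYOND_LLSC
  #define __WEAK_LLSC_MB	"	sync	\n"
  #else
  #define __WEAK_LLSC_MB	"		\n"
  #endif

  #define smp_llsc_mb() \
	__asm__ __volatile__(__WEAK_LLSC_MB : : : "memory")

If ordering is only needed after the last sc/scd, selecting that option
would already cover it without touching every ll.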

> diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
> index 0ab176b..99a6d01 100644
> --- a/arch/mips/include/asm/atomic.h
> +++ b/arch/mips/include/asm/atomic.h
> @@ -62,6 +62,7 @@ static __inline__ void atomic_##op(int i, atomic_t * v)			      \
>  		do {							      \
>  			__asm__ __volatile__(				      \
>  			"	.set	"MIPS_ISA_LEVEL"		\n"   \
> +			__WAR_LLSC_MB					      \
>  			"	ll	%0, %1		# atomic_" #op "\n"   \
>  			"	" #asm_op " %0, %2			\n"   \
>  			"	sc	%0, %1				\n"   \
> @@ -69,6 +70,7 @@ static __inline__ void atomic_##op(int i, atomic_t * v)			      \
>  			: "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter)  \
>  			: "Ir" (i));					      \
>  		} while (unlikely(!temp));				      \
> +		__asm__ __volatile__(__WAR_LLSC_MB : : :"memory");	      \

This still results in an additional compiler barrier on other platforms,
so if it must remain it needs abstracting.
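
Something like the following (war_llsc_mb() is a made-up name, purely
for illustration) would keep it out of line for everyone else, since
the empty variant carries no "memory" clobber:

  /* Hypothetical abstraction, name invented for illustration: */
  #if defined(CONFIG_LOONGSON3) && defined(CONFIG_SMP)
  #define war_llsc_mb() \
	__asm__ __volatile__("	sync	\n" : : : "memory")
  #else
  #define war_llsc_mb()	do { } while (0)	/* compiles to nothing */
  #endif

Then atomic.h and friends could call war_llsc_mb() after the retry loop
instead of open-coding the asm.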

> diff --git a/arch/mips/include/asm/barrier.h b/arch/mips/include/asm/barrier.h
> index 0e8e6af..268d921 100644
> --- a/arch/mips/include/asm/barrier.h
> +++ b/arch/mips/include/asm/barrier.h
> @@ -203,6 +203,12 @@
>  #define __WEAK_LLSC_MB		"		\n"
>  #endif
>  
> +#if defined(CONFIG_LOONGSON3) && defined(CONFIG_SMP) /* Loongson-3's LLSC workaround */
> +#define __WAR_LLSC_MB		"	sync	\n"
> +#else
> +#define __WAR_LLSC_MB		"		\n"
> +#endif

A comment explaining the whole issue would be helpful for others trying
to decipher this.
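
For example, something along these lines above the #if (wording taken
only from your commit message, so please correct it as needed):

  /*
   * Loongson-2G/2H/3A/3B erratum: ll/sc and lld/scd are only weakly
   * ordered, so a sync is needed before each ll/lld and after the
   * last sc/scd of a sequence, otherwise the retry loop can
   * occasionally deadlock under heavy load.
   */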

> diff --git a/arch/mips/loongson64/Platform b/arch/mips/loongson64/Platform
> index 0fce460..3700dcf 100644
> --- a/arch/mips/loongson64/Platform
> +++ b/arch/mips/loongson64/Platform
> @@ -23,6 +23,9 @@ ifdef CONFIG_CPU_LOONGSON2F_WORKAROUNDS
>  endif
>  
>  cflags-$(CONFIG_CPU_LOONGSON3)	+= -Wa,--trap
> +ifneq ($(call as-option,-Wa$(comma)-mfix-loongson3-llsc,),)
> +  cflags-$(CONFIG_CPU_LOONGSON3) += -Wa$(comma)-mno-fix-loongson3-llsc
> +endif

Could this be a separate patch?

This needs more explanation.
- What does this do exactly?
- Why are you turning *OFF* the assembler fix?
- Was some fix we don't want already in use by default?

> diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
> index 3d3dfba..a507ba7 100644
> --- a/arch/mips/mm/tlbex.c
> +++ b/arch/mips/mm/tlbex.c
> @@ -919,6 +919,8 @@ build_get_pgd_vmalloc64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
>  		 * to mimic that here by taking a load/istream page
>  		 * fault.
>  		 */
> +		if (current_cpu_type() == CPU_LOONGSON3)
> +			uasm_i_sync(p, 0);

I suggest abstracting this out with a nice comment explaining why it is
necessary.
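
E.g. something like this (hypothetical wrapper, name made up for
illustration), so the TLB handler builders don't grow bare
current_cpu_type() checks:

  /* Emit the Loongson-3 ll/sc erratum sync (see barrier.h). */
  static void build_llsc_war_sync(u32 **p)
  {
	if (current_cpu_type() == CPU_LOONGSON3)
		uasm_i_sync(p, 0);
  }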

Cheers
James
