Re: [PATCH v2] parisc: Fix spinlock barriers

Hi Dave,

just a few questions and suggestions below....

On 17.07.20 23:15, John David Anglin wrote:
> diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
> index 70fecb8dc4e2..a23f7bc5d189 100644
> --- a/arch/parisc/include/asm/spinlock.h
> +++ b/arch/parisc/include/asm/spinlock.h
> @@ -10,34 +10,54 @@
>  static inline int arch_spin_is_locked(arch_spinlock_t *x)
>  {
>  	volatile unsigned int *a = __ldcw_align(x);
> -	smp_mb();
> -	return *a == 0;
> +	return READ_ONCE(*a) == 0;
>  }
>
> -static inline void arch_spin_lock(arch_spinlock_t *x)
> +static inline int __pa_ldcw (volatile unsigned int *a)
> +{
> +#if __PA_LDCW_ALIGNMENT==16
> +	*(volatile char *)a = 0;
> +#endif

I assume this is intended as a kind of prefetch into the cache?
But doesn't it possibly introduce a bug if the first byte
(into which you write zero) wasn't zero to begin with?
In that case the following ldcw():

> +	return __ldcw(a);

returns zero, although the word wasn't zero initially?


> +}
> +
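
Just so we're looking at the same thing, here is my reading of the new
sequence as a plain C sketch (non-atomic and illustrative only; that the
lock word holds 1 when free is my assumption, as in arch_spinlock_t today):

	/* Non-atomic stand-in for the patched __pa_ldcw(), just to
	 * spell out the ordering I mean.
	 */
	static unsigned int sketch_pa_ldcw(volatile unsigned int *a)
	{
		unsigned int old;

		*(volatile char *)a = 0;	/* the new byte store */
		old = *a;			/* ldcw reads the full word... */
		*a = 0;				/* ...and clears it atomically */
		return old;			/* 0 is taken as "already locked" */
	}

On big-endian parisc the free value 1 has a zero first byte anyway, so the
store may be harmless in that case; my worry is any state in which byte 0
of the lock word is non-zero.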


> diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
> index f05c9d5b6b9e..4ba43a3c7c67 100644
> --- a/arch/parisc/kernel/syscall.S
> +++ b/arch/parisc/kernel/syscall.S
> @@ -659,8 +678,15 @@ cas_action:
>  	/* Error occurred on load or store */
>  	/* Free lock */
>  #ifdef CONFIG_SMP
> -98:	LDCW	0(%sr2,%r20), %r1			/* Barrier */
> +98:	sync			/* Barrier */
>  99:	ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
> +	nop
> +	nop
> +	nop
> +	nop
> +	nop
> +	nop
> +	nop

I think you should put those nops before the 99: ALTERNATIVE() instruction
above. That way they all get replaced by one jump instead of multiple nops
(when an SMP kernel runs on a machine with only one CPU).
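
I.e. something like this (untested sketch, labels as in your patch; the
alternatives code patches a multi-instruction INSN_NOP range with a single
"b,n" branch over it):

	#ifdef CONFIG_SMP
	98:	sync			/* Barrier */
		nop
		nop
		nop
		nop
		nop
		nop
		nop
	99:	ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
	#endif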

Helge
