Re: [PATCH v2] parisc: Fix spinlock barriers

On 2020-07-18 9:08 a.m., John David Anglin wrote:
>>> -static inline void arch_spin_lock(arch_spinlock_t *x)
>>> +static inline int __pa_ldcw (volatile unsigned int *a)
>>> +{
>>> +#if __PA_LDCW_ALIGNMENT==16
>>> +	*(volatile char *)a = 0;
>>> +#endif
>>>
>>> I assume this is intended as a kind of prefetch into the cache here?
>>> But doesn't it potentially introduce a bug if the first byte
>>> (into which you write zero) wasn't zero to begin with?
>>> In that case the following ldcw():
> The intention is to dirty the cache line.  Note the above generates a stb instruction that operates
> on the most significant byte of the lock word.  The release uses a stw and sets bit 31 in the least
> significant byte of the spin lock word.  So, the stb doesn't affect the state of the lock.
>
> When the cache line is dirty, the ldcw instruction may be optimized to operate in cache.  This speeds
> up the operation.
>
> Another alternative is to use the stby instruction.  See the programming note on page 7-135 of the
> architecture manual.  It doesn't write anything when the address is the leftmost byte of a word, but
> it still can be used to dirty the cache line.
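
For reference, the complete helper from the hunk quoted above would read roughly as follows.  This is only a sketch, not the exact patch text; it assumes the existing __ldcw() load-and-clear wrapper from arch/parisc/include/asm/ldcw.h:

static inline int __pa_ldcw(volatile unsigned int *a)
{
#if __PA_LDCW_ALIGNMENT == 16
	/* Dirty the cache line so the ldcw below may be satisfied in
	 * cache.  The stb hits the most significant byte of the lock
	 * word, so it leaves the lock state alone as long as the
	 * release value only sets bits in the low bytes.
	 */
	*(volatile char *)a = 0;
#endif
	return __ldcw(a);	/* atomic load and clear of the lock word */
}
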
Wait, you are correct.  We use other values to free the lock in entry.S and syscall.S.  Using the space register
value in entry.S might be problematic as it's a long value.  Could we end up with the least significant 32 bits
all zero?
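
If the stb turns out to be unsafe with those release values, the stby alternative mentioned above would sidestep the problem, since it stores nothing.  A rough, untested sketch (the helper name is purely illustrative; per the programming note, stby,e with a word-aligned address stores no bytes but can still dirty the line):

static inline void __pa_dirty_lock_line(volatile unsigned int *a)
{
	/* stby,e on a word-aligned address writes no bytes; it only
	 * touches the line for write so it ends up dirty in cache.
	 */
	__asm__ __volatile__("stby,e %%r0, 0(%0)"
			     : : "r" (a) : "memory");
}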

Dave

-- 
John David Anglin  dave.anglin@xxxxxxxx



