Re: [PATCH 1/2] arm64: atomics: fix use of acquire + release for full barrier semantics

On Tue, Feb 04, 2014 at 12:29:12PM +0000, Will Deacon wrote:
> @@ -112,17 +114,20 @@ static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
>  	unsigned long tmp;
>  	int oldval;
>  
> +	smp_mb();
> +
>  	asm volatile("// atomic_cmpxchg\n"
> -"1:	ldaxr	%w1, %2\n"
> +"1:	ldxr	%w1, %2\n"
>  "	cmp	%w1, %w3\n"
>  "	b.ne	2f\n"
> -"	stlxr	%w0, %w4, %2\n"
> +"	stxr	%w0, %w4, %2\n"
>  "	cbnz	%w0, 1b\n"
>  "2:"
>  	: "=&r" (tmp), "=&r" (oldval), "+Q" (ptr->counter)
>  	: "Ir" (old), "r" (new)
>  	: "cc", "memory");
>  
> +	smp_mb();
>  	return oldval;
>  }
>  

Any particular reason atomic_cmpxchg() doesn't use the proposed rel + mb
scheme? It would be a waste to have atomic_cmpxchg() be more expensive
than it needs to be.
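
For concreteness, roughly what I have in mind is the following (untested
sketch, reusing the structure of the hunk above): keep the relaxed ldxr,
switch the store back to stlxr so the release ordering covers prior
accesses, and rely on a single trailing smp_mb() instead of the leading
barrier:

static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
{
	unsigned long tmp;
	int oldval;

	asm volatile("// atomic_cmpxchg\n"
"1:	ldxr	%w1, %2\n"
"	cmp	%w1, %w3\n"
"	b.ne	2f\n"
"	stlxr	%w0, %w4, %2\n"		/* store-release: prior accesses cannot pass it */
"	cbnz	%w0, 1b\n"
"2:"
	: "=&r" (tmp), "=&r" (oldval), "+Q" (ptr->counter)
	: "Ir" (old), "r" (new)
	: "cc", "memory");

	smp_mb();			/* order later accesses after the exchange */
	return oldval;
}

That would save one full barrier on every call compared to the smp_mb() +
relaxed ldxr/stxr + smp_mb() version quoted above, but I haven't tested
or benchmarked it.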



