On 7/2/19 2:07 AM, Roman Penyaev wrote:
Hi Bart,
On 2019-07-01 23:42, Bart Van Assche wrote:
...
+#if defined(__x86_64__)
+#define smp_store_release(p, v)			\
+do {							\
+	barrier();					\
+	WRITE_ONCE(*(p), (v));				\
+} while (0)
+
+#define smp_load_acquire(p)				\
+({							\
+	typeof(*p) ___p1 = READ_ONCE(*(p));		\
+	barrier();					\
+	___p1;						\
+})
Can we have these two macros for x86_32 as well?
As it stands, i386 will take the other branch, which uses a full
memory barrier that is not needed there.
Besides that, both patches look good to me.
Hi Roman,
Thanks for having taken a look. From Linux kernel source file
arch/x86/include/asm/barrier.h:
#ifdef CONFIG_X86_32
#define mb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)",	\
			"mfence", X86_FEATURE_XMM2) ::: "memory", "cc")
#define rmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)",	\
			"lfence", X86_FEATURE_XMM2) ::: "memory", "cc")
#define wmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)",	\
			"sfence", X86_FEATURE_XMM2) ::: "memory", "cc")
#else
#define mb() asm volatile("mfence":::"memory")
#define rmb() asm volatile("lfence":::"memory")
#define wmb() asm volatile("sfence" ::: "memory")
#endif
In other words, I think that 32-bit and 64-bit systems really do have to be
treated differently.
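
To illustrate (this is only a rough, untested sketch and not part of the
posted patches): an x86_32 branch would have to fall back on a real barrier
instruction. Plain i386 has no mfence, so, following the kernel's non-XMM2
ALTERNATIVE above, it would look something like the following (the smp_mb()
helper name is made up here, the WRITE_ONCE()/READ_ONCE() helpers are the
ones already used in the patch):

#if defined(__i386__)
/* Hypothetical sketch: full barrier via a locked RMW on the stack,
 * since plain i386 has no mfence instruction. */
#define smp_mb()					\
	asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")

#define smp_store_release(p, v)				\
do {							\
	smp_mb();					\
	WRITE_ONCE(*(p), (v));				\
} while (0)

#define smp_load_acquire(p)				\
({							\
	typeof(*p) ___p1 = READ_ONCE(*(p));		\
	smp_mb();					\
	___p1;						\
})
#endif

That is clearly heavier than the compiler-barrier-only x86_64 version, which
is why I kept the two cases separate.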
Bart.