Re: [PATCH liburing 2/2] Fix the use of memory barriers

On 7/2/19 2:07 AM, Roman Penyaev wrote:
> Hi Bart,
> 
> On 2019-07-01 23:42, Bart Van Assche wrote:
> 
> ...
> 
>> +#if defined(__x86_64__)
>> +#define smp_store_release(p, v)            \
>> +do {                        \
>> +    barrier();                \
>> +    WRITE_ONCE(*(p), (v));            \
>> +} while (0)
>> +
>> +#define smp_load_acquire(p)            \
>> +({                        \
>> +    typeof(*p) ___p1 = READ_ONCE(*(p));    \
>> +    barrier();                \
>> +    ___p1;                    \
>> +})
> 
> Can we have these two macros for x86_32 as well?
> For i386 it will take another branch with a full mb(),
> which is not needed.
> 
> Besides that, both patches look good to me.
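
For context, the generic fallback Roman refers to presumably builds both
operations from a full memory barrier, along these lines (a sketch for
illustration only; the patch's actual fallback may differ in detail):

#define smp_store_release(p, v)			\
do {						\
	mb();	/* full barrier; a real fence instruction on i386 */ \
	WRITE_ONCE(*(p), (v));			\
} while (0)

#define smp_load_acquire(p)			\
({						\
	typeof(*p) ___p1 = READ_ONCE(*(p));	\
	mb();	/* full barrier; a real fence instruction on i386 */ \
	___p1;					\
})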

Hi Roman,

Thanks for taking a look. From the Linux kernel source file arch/x86/include/asm/barrier.h:

#ifdef CONFIG_X86_32
#define mb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)",\
		"mfence", X86_FEATURE_XMM2) ::: "memory", "cc")
#define rmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)",\
		"lfence", X86_FEATURE_XMM2) ::: "memory", "cc")
#define wmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)",\
		"sfence", X86_FEATURE_XMM2) ::: "memory", "cc")
#else
#define mb() 	asm volatile("mfence":::"memory")
#define rmb()	asm volatile("lfence":::"memory")
#define wmb()	asm volatile("sfence" ::: "memory")
#endif

In other words, I think that 32-bit and 64-bit systems really do have to be treated differently.
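
For reference, the kernel's ALTERNATIVE() runtime patching is not available
to user space, so a 32-bit liburing would have to pick one instruction
sequence at build time. A lock-prefixed read-modify-write works on every
i386-class CPU, whereas mfence and friends are unavailable on CPUs without
SSE2 (hence the X86_FEATURE_XMM2 check above). A sketch of such a fallback
(hypothetical; not taken from the patch or the kernel):

#if defined(__i386__)
/* "lock; addl" to the stack is a full barrier on all 32-bit x86 CPUs. */
#define mb()	asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
#define rmb()	mb()	/* a full barrier also orders loads */
#define wmb()	mb()	/* and stores */
#endif

Note that Roman's request is narrower: smp_store_release() and
smp_load_acquire() rely only on x86's TSO ordering plus a compiler barrier,
so they do not involve mfence at all, which is why the question of widening
the #if guard to cover __i386__ remains open here.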

Bart.


