Re: [PATCH v4 5/5] x86: drop mfence in favor of lock+addl


On Wed, Jan 27, 2016 at 7:10 AM, Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
>
> -#define __smp_mb()     mb()
> +#define __smp_mb()     asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")

So this doesn't look right for x86-64: it uses %esp rather than %rsp.
How did that even work for you?

                Linus
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


