Re: [PATCH V3] x86 spinlock: Fix memory corruption on completing completions

On 02/12, Raghavendra K T wrote:
>
> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>  	 * check again make sure it didn't become free while
>  	 * we weren't looking.
>  	 */
> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
> +	head = ACCESS_ONCE(lock->tickets.head);
> +	if (__tickets_equal(head, want)) {
>  		add_stats(TAKEN_SLOW_PICKUP, 1);

While at it, perhaps it makes sense to s/ACCESS_ONCE/READ_ONCE/, but this is
purely cosmetic.
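
To be explicit, I mean only the accessor on your new line, i.e. something like
this on top of your hunk (untested, just to show what I mean):

	-	head = ACCESS_ONCE(lock->tickets.head);
	+	head = READ_ONCE(lock->tickets.head);
	 	if (__tickets_equal(head, want)) {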

We also need to change another user of enter_slow_path, xen_lock_spinning()
in arch/x86/xen/spinlock.c.
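Assuming xen_lock_spinning() still has the same "check again" block as the kvm
code quoted above, the analogous change should look something like this
untested sketch:

	--- a/arch/x86/xen/spinlock.c
	+++ b/arch/x86/xen/spinlock.c
	 	/*
	 	 * check again make sure it didn't become free while
	 	 * we weren't looking
	 	 */
	-	if (ACCESS_ONCE(lock->tickets.head) == want) {
	+	head = ACCESS_ONCE(lock->tickets.head);
	+	if (__tickets_equal(head, want)) {
	 		add_stats(TAKEN_SLOW_PICKUP, 1);

plus a local "__ticket_t head;" in that function, like the one your kvm hunk
presumably adds.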

Other than that, it looks correct at first glance... but this is up to the
maintainers.

Oleg.
