Re: [PATCH RFC V4 4/5] kvm : pv-ticketlocks support for linux guests running on KVM hypervisor

* Marcelo Tosatti <mtosatti@xxxxxxxxxx> [2012-01-17 09:02:11]:

> > +/* Kick vcpu waiting on @lock->head to reach value @ticket */
> > +static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
> > +{
> > +	int cpu;
> > +	int apicid;
> > +
> > +	add_stats(RELEASED_SLOW, 1);
> > +
> > +	for_each_cpu(cpu, &waiting_cpus) {
> > +		const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
> > +		if (ACCESS_ONCE(w->lock) == lock &&
> > +		    ACCESS_ONCE(w->want) == ticket) {
> > +			add_stats(RELEASED_SLOW_KICKED, 1);
> > +			apicid = per_cpu(x86_cpu_to_apicid, cpu);
> > +			kvm_kick_cpu(apicid);
> > +			break;
> > +		}
> > +	}
> 
> What prevents a kick from being lost here, if say, the waiter is at
> local_irq_save in kvm_lock_spinning, before the lock/want assignments?

The waiter does check whether the lock has become available before
actually going to sleep:

+	/*
+	 * check again to make sure it didn't become free while
+	 * we weren't looking.
+	 */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		add_stats(TAKEN_SLOW_PICKUP, 1);
+		goto out;
+	}
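
In other words, the re-check and the unlock-side scan cover each other:
the waiter publishes its (lock, want) pair and sets its bit in
waiting_cpus before the re-check, so either kvm_unlock_kick() finds a
matching entry and kicks, or the re-check sees the freed head and we
never halt. Roughly, the waiter slow path is ordered like the sketch
below (paraphrased from kvm_lock_spinning in this series; stats, timing
accounting and the exact barrier placement are omitted/approximate):

static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
{
	struct kvm_lock_waiting *w = &__get_cpu_var(lock_waiting);
	int cpu = smp_processor_id();
	unsigned long flags;

	/* irqs off so the per-cpu wait state is not clobbered by an irq */
	local_irq_save(flags);

	/* publish what we are waiting for; kvm_unlock_kick() compares these */
	w->want = want;
	w->lock = lock;

	/* make the stores above visible before advertising ourselves */
	smp_wmb();
	cpumask_set_cpu(cpu, &waiting_cpus);

	/*
	 * check again to make sure it didn't become free while
	 * we weren't looking.
	 */
	if (ACCESS_ONCE(lock->tickets.head) == want)
		goto out;

	/* block until kicked by the lock holder's unlock path */
	halt();
out:
	cpumask_clear_cpu(cpu, &waiting_cpus);
	w->lock = NULL;
	local_irq_restore(flags);
}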

- vatsa


