On 20/01/2015 09:48, Wincy Van wrote:
> +static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
> +		int vector)
> +{
> +	int r = 0;
> +	struct vmcs12 *vmcs12;
> +
> +	/*
> +	 * Since posted intr delivery is async,
> +	 * we must aquire a spin-lock to avoid
> +	 * the race of vmcs12.
> +	 */
> +	spin_lock(&to_vmx(vcpu)->nested.vmcs12_lock);
> +	vmcs12 = get_vmcs12(vcpu);
> +	if (!is_guest_mode(vcpu) || !vmcs12) {
> +		r = -1;
> +		goto out;
> +	}

is_guest_mode should be checked first outside the lock, to avoid
affecting the non-nested fast path. You can then recheck it later
inside the lock.

Another way to avoid the spinlock: in prepare_vmcs02 or a similar
place, you can save vmcs12->posted_intr_nv in a new field
vmx->nested.posted_intr_nv; just set it to -1 if
!nested_cpu_has_posted_intr(vmcs12). In vmclear, again you just set
the field to -1, and here you can do

	if (!is_guest_mode(vcpu) ||
	    vector != to_vmx(vcpu)->nested.posted_intr_nv) {
		r = -1;
		goto out;
	}

You don't need to access vmcs12, and while there is a race, it's okay
because there is no pointer access (see the sketch at the end of this
message).

> +	if (vcpu->mode == IN_GUEST_MODE)
> +		apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
> +				POSTED_INTR_VECTOR);

Please add a comment that PIR and ON have been set by the L1 hypervisor.

I'll do a full review of the other patches as soon as possible.

Paolo

> +	else {
> +		r = -1;
> +		goto out;
> +	}
> +
> +	/*
> +	 * if posted intr is done by hardware, the
> +	 * corresponding eoi was sent to L0. Thus
> +	 * we should send eoi to L1 manually.
> +	 */
> +	kvm_apic_set_eoi_accelerated(vcpu,
> +			vmcs12->posted_intr_nv);
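
For illustration, a minimal sketch of the posted_intr_nv caching scheme
suggested above. This is not a committed implementation: it assumes a
new signed field "int posted_intr_nv" in struct nested_vmx (signed so
that -1 can serve as the "no nested posted interrupts" sentinel), reuses
the helpers already visible in the quoted patch (to_vmx, is_guest_mode,
get_cpu_mask, apic->send_IPI_mask, POSTED_INTR_VECTOR,
nested_cpu_has_posted_intr), and omits the EOI-to-L1 step from the
original patch for brevity.

	/* In prepare_vmcs02(): cache L1's notification vector. */
	if (nested_cpu_has_posted_intr(vmcs12))
		vmx->nested.posted_intr_nv = vmcs12->posted_intr_nv;
	else
		vmx->nested.posted_intr_nv = -1;

	/* In the VMCLEAR handler (and on nested vmexit): invalidate it. */
	vmx->nested.posted_intr_nv = -1;

	/* Delivery path: no spinlock and no vmcs12 dereference needed. */
	static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
			int vector)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		/*
		 * Racy against vmclear/vmexit, but harmless: we only
		 * compare an integer, never follow a stale vmcs12 pointer.
		 */
		if (is_guest_mode(vcpu) &&
		    vector == vmx->nested.posted_intr_nv &&
		    vcpu->mode == IN_GUEST_MODE) {
			/* PIR and ON have been set by the L1 hypervisor. */
			apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
					POSTED_INTR_VECTOR);
			return 0;
		}
		return -1;
	}

The point of the design is that the delivery path, which can run
asynchronously, only ever reads a plain integer field of the vcpu; a
stale value can at worst cause a spurious -1 (fall back to the normal
path), never a dereference of a freed or changing vmcs12.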