On 19/01/2015 13:34, Wincy Van wrote:
> Actually, there is a race window between
> vmx_deliver_nested_posted_interrupt and nested_release_vmcs12,
> since posted intr delivery is async:
>
>     cpu 1                                   cpu 2
>     (nested posted intr)                    (dest vcpu, release vmcs12)
>
>     vmcs12 = get_vmcs12(vcpu);
>     if (!is_guest_mode(vcpu) || !vmcs12) {
>             r = -1;
>             goto out;
>     }
>
>                                             kunmap(vmx->nested.current_vmcs12_page);
>
>     ......
>
>     oops! current vmcs12 is invalid.
>
> However, we have already checked that the destination vcpu is in
> guest mode, and if L1 wants to destroy vmcs12 (in handle_vmptrld/clear,
> etc.), the dest vcpu must have done a nested vmexit and a non-nested
> vmexit (handle_vmptr***).
>
> Hence, we can disable local interrupts while delivering nested posted
> interrupts to make sure we are faster than the destination vcpu. This
> is a bit tricky, but it can avoid that race. I think we do not need to
> add a spin lock here. RCU does not fit this case, since it would
> introduce a new race window between the RCU handler and handle_vmptr**.
>
> I am wondering whether there is a better way :)

Why not just use a spinlock?

Paolo
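[Editor's note: for illustration, a minimal sketch of what the spinlock
variant might look like. The vmcs12_lock field, the trimmed-down struct,
and the helper names are assumptions made up for this sketch; they are
not the actual KVM code or Paolo's concrete proposal.]

    /*
     * Sketch only: serialize the async delivery path against the path
     * that unmaps the current vmcs12, so delivery can never dereference
     * a vmcs12 that cpu 2 has already released.
     */
    #include <linux/spinlock.h>
    #include <linux/highmem.h>

    struct nested_vmx_sketch {
            spinlock_t vmcs12_lock;            /* assumed new field */
            struct page *current_vmcs12_page;
            void *current_vmcs12;              /* kmap()ed vmcs12, NULL once released */
    };

    /* Delivery side (cpu 1): hold the lock while vmcs12 is inspected. */
    static int deliver_nested_posted_intr_sketch(struct nested_vmx_sketch *nested,
                                                  bool guest_mode)
    {
            int r = -1;

            spin_lock(&nested->vmcs12_lock);
            if (!guest_mode || !nested->current_vmcs12)
                    goto out;   /* L2 not running or vmcs12 already gone */
            /* ... post the interrupt using the still-mapped vmcs12 ... */
            r = 0;
    out:
            spin_unlock(&nested->vmcs12_lock);
            return r;
    }

    /* Release side (cpu 2, e.g. on VMPTRLD/VMCLEAR): kunmap under the lock. */
    static void release_vmcs12_sketch(struct nested_vmx_sketch *nested)
    {
            spin_lock(&nested->vmcs12_lock);
            kunmap(nested->current_vmcs12_page);   /* delivery cannot race in here */
            nested->current_vmcs12 = NULL;
            spin_unlock(&nested->vmcs12_lock);
    }

The trade-off, roughly: the lock closes the window without relying on the
delivering CPU "being faster" than the destination vcpu's nested and
non-nested vmexits, at the cost of taking the lock on every delivery and
release. Whether plain spin_lock() suffices or an irq-safe variant is
needed depends on the contexts the real paths run in, which this sketch
does not try to settle.]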