On 27/12/2017 13:52, Liran Alon wrote:
>
> The race-condition described in the commit message of patch 9 is fixed
> in patch 9. It is fixed because we get rid of the dependency on
> KVM_REQ_EVENT.

Or is it fixed because, while getting rid of the dependency on
KVM_REQ_EVENT, you had to move kvm_vcpu_trigger_posted_interrupt to the
correct place:

-		/* the PIR and ON have been set by L1. */
-		kvm_vcpu_trigger_posted_interrupt(vcpu, true);
 		/*
 		 * If a posted intr is not recognized by hardware,
 		 * we will accomplish it in the next vmentry.
 		 */
 		vmx->nested.pi_pending = true;
-		kvm_make_request(KVM_REQ_EVENT, vcpu);
+		/* the PIR and ON have been set by L1. */
+		kvm_vcpu_trigger_posted_interrupt(vcpu, true);
 		return 0;

?

Getting rid of KVM_REQ_EVENT is not a requirement to fix the
vcpu->requests race.  There is a canonical way to avoid that, which is
to use kvm_vcpu_kick.

To me the main value of patch 9 is introducing the nested PI handler.
The fact that patch 9 fixes that race condition is almost a side effect
(and in fact it's not entirely fixed until patch 10, in my opinion).

Paolo

> If the target vCPU thread passes the check for pending
> kvm requests, it means it is already running with interrupts disabled
> and therefore the physical IPI of POSTED_INTR_NESTED_VECTOR will be
> received in the guest, which will process nested posted-interrupts
> correctly. If the guest exits because of another external interrupt
> before the physical IPI is received, then on the next VMEntry the code
> will note that there are pending nested posted-interrupts and
> re-trigger the physical IPI of POSTED_INTR_NESTED_VECTOR. Therefore,
> nested posted-interrupts will eventually be processed in the guest.
> This is in contrast to what happens before patch 9, where the L2 guest
> will continue running until the next time KVM_REQ_EVENT is consumed.
> Therefore, the bug is indeed fixed in patch 9.
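
For completeness, a minimal sketch of the kvm_vcpu_kick alternative
mentioned above (assuming the KVM_REQ_EVENT-based delivery were kept;
this is illustrative only, not code from the series):

	/*
	 * Sketch: keep the request-based path, but kick the target vCPU
	 * so a request set after its pending-requests check is still
	 * noticed before the next guest entry, closing the race without
	 * dropping KVM_REQ_EVENT.
	 */
	vmx->nested.pi_pending = true;
	kvm_make_request(KVM_REQ_EVENT, vcpu);
	kvm_vcpu_kick(vcpu);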