On Wed, Dec 08, 2021, Paolo Bonzini wrote:
> On 12/8/21 02:52, Sean Christopherson wrote:
> > +	/*
> > +	 * Unload the AVIC when the vCPU is about to block, _before_ the vCPU
> > +	 * actually blocks.  The vCPU needs to be marked IsRunning=0 before the
> > +	 * final pass over the vIRR via kvm_vcpu_check_block().  Any IRQs that
> > +	 * arrive before IsRunning=0 will not signal the doorbell, i.e. it's
> > +	 * KVM's responsibility to ensure there are no pending IRQs in the vIRR
> > +	 * after IsRunning is cleared, prior to scheduling out the vCPU.
>
> I prefer to phrase this around paired memory barriers and the usual
> store/smp_mb/load lockless idiom:

I've no objection to that, my goal is/was purely to emphasize the need to
manually process the vIRR after clearing IsRunning.

> 	/*
> 	 * Unload the AVIC when the vCPU is about to block, _before_
> 	 * the vCPU actually blocks.
> 	 *
> 	 * Any IRQs that arrive before IsRunning=0 will not cause an
> 	 * incomplete IPI vmexit on the source,

It's not just IPIs, the GA log will also suffer the same fate.  That's why I
didn't mention incomplete VM-Exits.  I'm certainly not opposed to that
clarification, but I don't want readers to walk away thinking this is only a
problem for IPIs.

> 	 * therefore vIRR will also

s/vIRR will/the vIRR must

to make it abundantly clear that checking the vIRR is effectively a hard
requirement.

> 	 * be checked by kvm_vcpu_check_block() before blocking.  The
> 	 * memory barrier implicit in set_current_state orders writing

set_current_state()

> 	 * IsRunning=0 before reading the vIRR.  The processor needs a
> 	 * matching memory barrier on interrupt delivery between writing
> 	 * IRR and reading IsRunning; the lack of this barrier might be

Missing the opening parenthesis.

> 	 * the cause of errata #1235).
> 	 */