On Tue, May 11, 2021 at 11:51:57AM -0300, Marcelo Tosatti wrote:
> On Tue, May 11, 2021 at 10:39:11AM -0400, Peter Xu wrote:
> > On Fri, May 07, 2021 at 07:08:31PM -0300, Marcelo Tosatti wrote:
> > > > Wondering whether we should add a pi_test_on() check in
> > > > kvm_vcpu_has_events() somehow, so that even without a customized
> > > > ->vcpu_check_block we should be able to break the block loop (as
> > > > kvm_arch_vcpu_runnable will return true properly)?
> > >
> > > static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
> > > {
> > >         int ret = -EINTR;
> > >         int idx = srcu_read_lock(&vcpu->kvm->srcu);
> > >
> > >         if (kvm_arch_vcpu_runnable(vcpu)) {
> > >                 kvm_make_request(KVM_REQ_UNHALT, vcpu);    <---
> > >                 goto out;
> > >         }
> > >
> > > Don't want to unhalt the vcpu.
> >
> > Could you elaborate?  It's not obvious to me why we can't do that if
> > pi_test_on() returns true.. we have pending posted interrupts anyway, so
> > shouldn't we stop halting?  Thanks!
>
> pi_test_on() only returns true when an interrupt is signalled by the
> device.  But the sequence of events is:
>
> 1. pCPU idles without the notification vector configured to the wakeup
>    vector.
>
> 2. PCI device is hotplugged, assigned device count increases from 0 to 1.
>
> <arbitrary amount of time>
>
> 3. device generates an interrupt, sets the ON bit in the posted
>    interrupt descriptor.
>
> We want to exit kvm_vcpu_block after 2, but before 3 (where the ON bit
> is not yet set).

Ah yes.. thanks.

Besides the current approach, I'm thinking maybe it'll be cleaner/less LOC
to define a KVM_REQ_UNBLOCK to replace the pre_block hook (in x86's
kvm_host.h):

#define KVM_REQ_UNBLOCK           KVM_ARCH_REQ(31)

We can set it in vmx_pi_start_assignment(), then check+clear it in
kvm_vcpu_has_events() (or make it a bool in the kvm_vcpu struct?).

The thing is the current vmx_vcpu_check_block() is mostly a sanity check
plus a copy-paste of the PI checks on a few items, so maybe it's cleaner to
use KVM_REQ_UNBLOCK, as it might be reused in the future to re-evaluate
pre-block for similar purposes?

No strong opinion, though.

-- 
Peter Xu
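
A completely untested sketch of the above idea, assuming KVM_ARCH_REQ(31)
is still free; the exact hook-up points are only illustrative, and
kvm_make_all_cpus_request() is just one possible way to set the request:

/* arch/x86/include/asm/kvm_host.h */
#define KVM_REQ_UNBLOCK         KVM_ARCH_REQ(31)

/* arch/x86/kvm/vmx/posted_intr.c */
void vmx_pi_start_assignment(struct kvm *kvm)
{
        /* Nothing to do if IRQ posting for assigned devices is absent. */
        if (!irq_remapping_cap(IRQ_POSTING_CAP))
                return;

        /*
         * Kick every vcpu out of kvm_vcpu_block() when the first device
         * gets assigned (step 2 above), so that blocking is re-evaluated
         * and the wakeup vector is armed before the device's first
         * interrupt sets the ON bit (step 3).
         */
        kvm_make_all_cpus_request(kvm, KVM_REQ_UNBLOCK);
}

/* arch/x86/kvm/x86.c */
static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
{
        /*
         * The "check+clear" above: kvm_check_request() tests and clears
         * the bit, so a single kick breaks the block loop exactly once.
         */
        if (kvm_check_request(KVM_REQ_UNBLOCK, vcpu))
                return true;

        /* ... existing event checks go here, unchanged ... */

        return false;
}

The bool-in-kvm_vcpu alternative mentioned above would behave similarly;
the request machinery just gives us the test-and-clear semantics and the
vcpu kick for free.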