On Wed, Sep 07, 2022, Sean Christopherson wrote:
> On Wed, Sep 07, 2022, Yuan Yao wrote:
> > On Tue, Sep 06, 2022 at 09:26:33PM -0700, Mingwei Zhang wrote:
> > > > > @@ -10700,6 +10706,12 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
> > > > >  		if (kvm_cpu_has_pending_timer(vcpu))
> > > > >  			kvm_inject_pending_timer_irqs(vcpu);
> > > > >
> > > > > +		if (vcpu->arch.nested_get_pages_pending) {
> > > > > +			r = kvm_get_nested_state_pages(vcpu);
> > > > > +			if (r <= 0)
> > > > > +				break;
> > > > > +		}
> > > > > +
> > > > Will this leads to skip the get_nested_state_pages for L2 first time
> > > > vmentry in every L2 running iteration ? Because with above changes
> > > > KVM_REQ_GET_NESTED_STATE_PAGES is not set in
> > > > nested_vmx_enter_non_root_mode() and
> > > > vcpu->arch.nested_get_pages_pending is not checked in
> > > > vcpu_enter_guest().
> > >
> > > Good catch. I think the diff won't work when vcpu is runnable.
>
> It works, but it's inefficient if the request comes from KVM_SET_NESTED_STATE.
> The pending KVM_REQ_UNBLOCK that comes with the flag will prevent actually
> running the guest. Specifically, this chunk of code will detect the pending
> request and bail out of vcpu_enter_guest().
>
> 	if (kvm_vcpu_exit_request(vcpu)) {
> 		vcpu->mode = OUTSIDE_GUEST_MODE;
> 		smp_wmb();
> 		local_irq_enable();
> 		preempt_enable();
> 		kvm_vcpu_srcu_read_lock(vcpu);
> 		r = 1;
> 		goto cancel_injection;
> 	}
>
> But the inefficiency is a non-issue since "true" emulation of VM-Enter will
> flow through this path (the VMRESUME/VMLAUNCH/VMRUN exit handler runs at the
> end of vcpu_enter_guest()).

Actually, nested VM-Enter doesn't use this path at all. The above holds true
for emulated RSM, but that's largely a moot point since RSM isn't exactly a
hot path.