On Sat, 2021-11-20 at 17:48 +0200, Mika Penttilä wrote:
> > @@ -9785,6 +9787,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >  	local_irq_disable();
> >  	vcpu->mode = IN_GUEST_MODE;
> >  
> > +	/*
> > +	 * If the guest requires direct access to mapped L1 pages, check
> > +	 * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
> > +	 * to go and revalidate them, if necessary.
> > +	 */
> > +	if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
> > +		kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
>
> But KVM_REQ_GET_NESTED_STATE_PAGES is not checked until the next
> vcpu_enter_guest() entry?

Sure, but that's why this call to ->check_guest_maps() comes just a few
lines *before* the 'if (kvm_vcpu_exit_request(vcpu))' that will bounce
us back out, so that we go through vcpu_enter_guest() from the start
again?
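For context, the ordering being discussed looks roughly like this. This
is a heavily trimmed sketch, not the exact upstream code: the
check_guest_maps() hook is what the patch above adds, while the
KVM_REQ_GET_NESTED_STATE_PAGES handling and kvm_vcpu_exit_request()
already exist in vcpu_enter_guest():

	static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
	{
		...
		if (kvm_request_pending(vcpu)) {
			...
			/*
			 * Second pass: the request raised further down is
			 * serviced here, before we actually enter the guest.
			 */
			if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
				if (!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu)) {
					r = 0;
					goto out;
				}
			}
			...
		}
		...
		local_irq_disable();
		vcpu->mode = IN_GUEST_MODE;

		/*
		 * First pass: may raise KVM_REQ_GET_NESTED_STATE_PAGES if
		 * the cached mappings need revalidating...
		 */
		if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
			kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
		...
		/*
		 * ...which makes kvm_request_pending() true, so we bail out
		 * here and go back through vcpu_enter_guest() from the top.
		 */
		if (kvm_vcpu_exit_request(vcpu)) {
			vcpu->mode = OUTSIDE_GUEST_MODE;
			r = 1;
			goto cancel_injection;
		}
		...
	}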