On Sat, 2021-11-20 at 18:30 +0200, Mika Penttilä wrote:
> 
> On 20.11.2021 18.21, David Woodhouse wrote:
> > On Sat, 2021-11-20 at 17:48 +0200, Mika Penttilä wrote:
> > > > @@ -9785,6 +9787,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> > > >  	local_irq_disable();
> > > >  	vcpu->mode = IN_GUEST_MODE;
> > > > 
> > > > +	/*
> > > > +	 * If the guest requires direct access to mapped L1 pages, check
> > > > +	 * the caches are valid. Will raise KVM_REQ_GET_NESTED_STATE_PAGES
> > > > +	 * to go and revalidate them, if necessary.
> > > > +	 */
> > > > +	if (is_guest_mode(vcpu) && kvm_x86_ops.nested_ops->check_guest_maps)
> > > > +		kvm_x86_ops.nested_ops->check_guest_maps(vcpu);
> > > 
> > > But KVM_REQ_GET_NESTED_STATE_PAGES is not checked until the next
> > > vcpu_enter_guest() entry?
> > 
> > Sure, but that's why this call to ->check_guest_maps() comes just a few
> > lines *before* the 'if (kvm_cpu_exit_request(vcpu))' that will bounce
> > us back out so that we go through vcpu_enter_guest() from the start
> > again?
> 
> Yes, I had forgotten that...

Having said that, I think Paolo was rightly trying to persuade me that
we don't need the check there at all anyway.

Because the *invalidation* will raise KVM_REQ_GPC_INVALIDATE, and
that's sufficient to bounce us out of IN_GUEST_MODE without us having
to check the GPC explicitly.

We do need to ensure that we are up to date with memslots changes, but
that's a bit simpler, and we have kvm_arch_memslots_changed() waking us
for that too.

So yeah, with sufficient thought I think we can be entirely event-
driven instead of having to have an explicit *check* on entry.
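[Editor's illustration] The event-driven scheme discussed above can be sketched as a small standalone C11 simulation. This is not real KVM code: the request bit name mirrors KVM_REQ_GPC_INVALIDATE from the mail, but the struct, the CAS-based "kick", and the simplified entry function are all illustrative stand-ins for the actual vcpu_enter_guest() / kvm_vcpu_kick() machinery.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative stand-in for the KVM request bit named in the mail. */
#define KVM_REQ_GPC_INVALIDATE (1u << 0)

enum vcpu_mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE, EXITING_GUEST_MODE };

/* Hypothetical minimal vCPU: just a request bitmask and a mode. */
struct vcpu {
	_Atomic unsigned int requests;
	_Atomic int mode;
};

/*
 * Invalidation path: raise the request, then "kick" the vCPU by moving
 * it out of IN_GUEST_MODE so it re-runs the entry-time request checks.
 * This is the event-driven part: no polling of the cache on entry.
 */
static void gpc_invalidate(struct vcpu *v)
{
	int expected = IN_GUEST_MODE;

	atomic_fetch_or(&v->requests, KVM_REQ_GPC_INVALIDATE);
	atomic_compare_exchange_strong(&v->mode, &expected,
				       EXITING_GUEST_MODE);
}

/*
 * Simplified entry: set IN_GUEST_MODE, then check for pending requests.
 * Any pending request bounces us back out before the guest would run,
 * so stale caches are revalidated first; returns true on a clean entry.
 */
static bool try_enter_guest(struct vcpu *v)
{
	atomic_store(&v->mode, IN_GUEST_MODE);
	if (atomic_load(&v->requests)) {
		atomic_store(&v->mode, OUTSIDE_GUEST_MODE);
		return false;	/* bounced out; caller revalidates, retries */
	}
	return true;		/* guest would actually run here */
}
```

The ordering matters in the same way the mail describes: the request is raised *before* the mode transition, and the entry path checks requests *after* setting IN_GUEST_MODE, so an invalidation can never slip through between the check and the entry.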