On Wed, Dec 14, 2022 at 01:39:59PM -0600, Michael Roth wrote:
> From: Nikunj A Dadhania <nikunj@xxxxxxx>
>
> KVM_HC_MAP_GPA_RANGE hypercall is used by the SEV guest to notify a
> change in the page encryption status to the hypervisor.
>
> The hypercall exits to userspace with KVM_EXIT_HYPERCALL exit code,
> currently this is used for explicit memory conversion between
> shared/private for memfd based private memory.
>
> Signed-off-by: Nikunj A Dadhania <nikunj@xxxxxxx>
> Signed-off-by: Michael Roth <michael.roth@xxxxxxx>
> ---
>  arch/x86/kvm/x86.c  | 8 ++++++++
>  virt/kvm/kvm_main.c | 1 +
>  2 files changed, 9 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index bb6adb216054..732f9cbbadb5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9649,6 +9649,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)

Couldn't find a better commit to comment on: when the guest has the
ptp-kvm module loaded, it will issue a KVM_HC_CLOCK_PAIRING hypercall.
This passes sev_es_validate_vmgexit() validation and ends up in this
function, where kvm_pv_clock_pairing() is called, which in turn calls
kvm_write_guest(). This results in a CPU soft-lockup, at least in my
testing.

Are there any emulated hypercalls that make sense for SNP guests? We
should block at least the ones that definitely don't work (see the
sketch at the end of this mail).

Jeremi

> 		break;
> 	case KVM_HC_MAP_GPA_RANGE: {
> 		u64 gpa = a0, npages = a1, attrs = a2;
> +		struct kvm_memory_slot *slot;
>
> 		ret = -KVM_ENOSYS;
> 		if (!(vcpu->kvm->arch.hypercall_exit_enabled & (1 << KVM_HC_MAP_GPA_RANGE)))
> @@ -9660,6 +9661,13 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> 			break;
> 		}
>
> +		slot = kvm_vcpu_gfn_to_memslot(vcpu, gpa_to_gfn(gpa));
> +		if (!vcpu->kvm->arch.upm_mode ||
> +		    !kvm_slot_can_be_private(slot)) {
> +			ret = 0;
> +			break;
> +		}
> +
> 		vcpu->run->exit_reason = KVM_EXIT_HYPERCALL;
> 		vcpu->run->hypercall.nr = KVM_HC_MAP_GPA_RANGE;
> 		vcpu->run->hypercall.args[0] = gpa;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d2daa049e94a..73bf0bdedb59 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2646,6 +2646,7 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
>
> 	return NULL;
> }
> +EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_memslot);
>
> bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
> {
> --
> 2.25.1
>
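To make the question above concrete, here is the kind of guard I have
in mind for the KVM_HC_CLOCK_PAIRING case in kvm_emulate_hypercall().
This is only a sketch: I'm assuming vcpu->arch.guest_state_protected
is a good-enough predicate for "guest memory KVM cannot write to";
the series may want a more precise SNP-specific check.

	case KVM_HC_CLOCK_PAIRING:
		/*
		 * kvm_pv_clock_pairing() copies the pairing struct back to
		 * the guest with kvm_write_guest(), which cannot work when
		 * guest memory is encrypted, so fail the hypercall up front
		 * instead of soft-locking the CPU.
		 *
		 * guest_state_protected is a stand-in for whatever predicate
		 * best identifies SEV-ES/SNP guests here.
		 */
		if (vcpu->arch.guest_state_protected) {
			ret = -KVM_ENOSYS;
			break;
		}
		ret = kvm_pv_clock_pairing(vcpu, a0, a1);
		break;

The same treatment would apply to any other emulated hypercall that
writes guest memory directly.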