On Thu, 2022-04-14 at 15:19 +0200, Vitaly Kuznetsov wrote:
> Similar to nSVM, KVM needs to know L2's VM_ID/VP_ID and Partition
> assist page address to handle L2 TLB flush requests.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> ---
>  arch/x86/kvm/svm/hyperv.h | 16 ++++++++++++++++
>  arch/x86/kvm/svm/nested.c |  2 ++
>  2 files changed, 18 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
> index 7d6d97968fb9..8cf702fed7e5 100644
> --- a/arch/x86/kvm/svm/hyperv.h
> +++ b/arch/x86/kvm/svm/hyperv.h
> @@ -9,6 +9,7 @@
>  #include <asm/mshyperv.h>
>  
>  #include "../hyperv.h"
> +#include "svm.h"
>  
>  /*
>   * Hyper-V uses the software reserved 32 bytes in VMCB
> @@ -32,4 +33,19 @@ struct hv_enlightenments {
>   */
>  #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
>  
> +static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_svm *svm = to_svm(vcpu);
> +	struct hv_enlightenments *hve =
> +		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;

Small nitpick: can we use this as an opportunity to rename 'reserved_sw' to
'hv_enlightenments' or something similar, since that is what it actually is?

Also, reserved_sw is an array, which is confusing: at first glance the cast
above looks like a pointer dereference.
> +	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> +
> +	if (!hv_vcpu)
> +		return;
> +
> +	hv_vcpu->nested.pa_page_gpa = hve->partition_assist_page;
> +	hv_vcpu->nested.vm_id = hve->hv_vm_id;
> +	hv_vcpu->nested.vp_id = hve->hv_vp_id;
> +}
> +
>  #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index bed5e1692cef..2d1a76343404 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -826,6 +826,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
>  
>  	svm->nested.nested_run_pending = 1;
>  
> +	nested_svm_hv_update_vm_vp_ids(vcpu);
> +
>  	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
>  		goto out_exit_err;
>  

That won't work after migration, since this won't be called if we migrate
with a nested guest running. I think nested_svm_hv_update_vm_vp_ids should
be called from enter_svm_guest_mode instead.

Best regards,
	Maxim Levitsky