On Wed, Nov 15, 2023 at 02:49:56PM +0800, Yuan Yao <yuan.yao@xxxxxxxxxxxxxxx> wrote:
> On Tue, Nov 07, 2023 at 06:56:37AM -0800, isaku.yamahata@xxxxxxxxx wrote:
> > From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> >
> > For vcpu migration, in the case of VMX, the VMCS is flushed on the source
> > pcpu and loaded on the target pcpu.  There are corresponding TDX SEAMCALL
> > APIs; call them on vcpu migration.  The logic is mostly the same as VMX,
> > except that the TDX SEAMCALLs are used.
> >
> > When shutting down the machine, (VMX or TDX) vcpus need to be shut down
> > on each pcpu.  Do the same for TDX with the TDX SEAMCALL APIs.
> >
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> > ---
> >  arch/x86/kvm/vmx/main.c    |  32 ++++++-
> >  arch/x86/kvm/vmx/tdx.c     | 190 ++++++++++++++++++++++++++++++++++++-
> >  arch/x86/kvm/vmx/tdx.h     |   2 +
> >  arch/x86/kvm/vmx/x86_ops.h |   4 +
> >  4 files changed, 221 insertions(+), 7 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> > index e7c570686736..8b109d0fe764 100644
> > --- a/arch/x86/kvm/vmx/main.c
> > +++ b/arch/x86/kvm/vmx/main.c
> > @@ -44,6 +44,14 @@ static int vt_hardware_enable(void)
> >  	return ret;
> >  }
> >
> ......
> > -void tdx_mmu_release_hkid(struct kvm *kvm)
> > +static int __tdx_mmu_release_hkid(struct kvm *kvm)
> >  {
> >  	bool packages_allocated, targets_allocated;
> >  	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> >  	cpumask_var_t packages, targets;
> > +	struct kvm_vcpu *vcpu;
> > +	unsigned long j;
> > +	int i, ret = 0;
> >  	u64 err;
> > -	int i;
> >
> >  	if (!is_hkid_assigned(kvm_tdx))
> > -		return;
> > +		return 0;
> >
> >  	if (!is_td_created(kvm_tdx)) {
> >  		tdx_hkid_free(kvm_tdx);
> > -		return;
> > +		return 0;
> >  	}
> >
> >  	packages_allocated = zalloc_cpumask_var(&packages, GFP_KERNEL);
> >  	targets_allocated = zalloc_cpumask_var(&targets, GFP_KERNEL);
> >  	cpus_read_lock();
> >
> > +	kvm_for_each_vcpu(j, vcpu, kvm)
> > +		tdx_flush_vp_on_cpu(vcpu);
> > +
> >  	/*
> >  	 * We can destroy multiple the guest TDs simultaneously. Prevent
> >  	 * tdh_phymem_cache_wb from returning TDX_BUSY by serialization.
> > @@ -236,6 +361,19 @@ void tdx_mmu_release_hkid(struct kvm *kvm)
> >  	 */
> >  	write_lock(&kvm->mmu_lock);
> >
> > +	err = tdh_mng_vpflushdone(kvm_tdx->tdr_pa);
> > +	if (err == TDX_FLUSHVP_NOT_DONE) {
>
> Not sure if I understand correctly: __tdx_mmu_release_hkid() is called in
> the MMU release callback, which means all threads of the process have
> already dropped the mm via do_exit(), so they won't run KVM code anymore,
> and tdx_flush_vp_on_cpu() is called for each pcpu they last ran on.  Will
> this error really happen?

KVM_TDX_RELEASE_VM calls this function too.  Maybe this check should be
introduced with the patch for KVM_TDX_RELEASE_VM.

--
Isaku Yamahata <isaku.yamahata@xxxxxxxxxxxxxxx>
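
For reference, below is a condensed sketch of the flow under discussion,
rearranged from the hunks quoted in this thread.  Locking, the cpumask
handling, and the cache writeback are omitted for brevity.  The -EAGAIN
return on TDX_FLUSHVP_NOT_DONE and the idea that a KVM_TDX_RELEASE_VM
caller would retry are assumptions for illustration only; the posted
patch does not show the body of that branch.

/*
 * Sketch of __tdx_mmu_release_hkid(), condensed from the quoted hunks.
 * ASSUMPTION: returning -EAGAIN on TDX_FLUSHVP_NOT_DONE so that a
 * KVM_TDX_RELEASE_VM caller could retry; not taken from the posted patch.
 */
static int __tdx_mmu_release_hkid(struct kvm *kvm)
{
	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
	struct kvm_vcpu *vcpu;
	unsigned long i;
	u64 err;

	if (!is_hkid_assigned(kvm_tdx))
		return 0;

	if (!is_td_created(kvm_tdx)) {
		/* No TDR page was created; just free the HKID. */
		tdx_hkid_free(kvm_tdx);
		return 0;
	}

	/*
	 * Flush the vcpu state still associated with the pcpu each vcpu
	 * last ran on.  On the MMU-release path all threads have already
	 * exited, so this should leave nothing associated.  For
	 * KVM_TDX_RELEASE_VM a vcpu may still be loaded elsewhere, which
	 * is why the TDX_FLUSHVP_NOT_DONE check below is still needed.
	 */
	kvm_for_each_vcpu(i, vcpu, kvm)
		tdx_flush_vp_on_cpu(vcpu);

	err = tdh_mng_vpflushdone(kvm_tdx->tdr_pa);
	if (err == TDX_FLUSHVP_NOT_DONE)
		return -EAGAIN;	/* assumption: let the caller retry */

	/* ... tdh_phymem_cache_wb() per package, then HKID reclaim ... */
	return 0;
}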