On Sat, Feb 22, 2025, Rick P Edgecombe wrote:
> On Thu, 2025-02-20 at 17:08 -0800, Sean Christopherson wrote:
> > Argh, after digging more, this isn't actually true.  The separate "unload MMUs"
> > code is leftover from when KVM rather stupidly tried to free all MMU pages when
> > freeing a vCPU.
> >
> > Commit 7b53aa565084 ("KVM: Fix vcpu freeing for guest smp") "fixed" things by
> > unloading MMUs before destroying vCPUs, but the underlying problem was trying to
> > free _all_ MMU pages when destroying a single vCPU.  That eventually got fixed
> > for good (haven't checked when), but the separate MMU unloading never got cleaned
> > up.
> >
> > So, scratch the mmu_destroy() idea.  But I still want/need to move vCPU destruction
> > before vm_destroy.
> >
> > Now that kvm_arch_pre_destroy_vm() is a thing, one idea would be to add
> > kvm_x86_ops.pre_destroy_vm(), which would pair decently well with the existing
> > call to kvm_mmu_pre_destroy_vm().
>
> So:
>   kvm_x86_call(vm_destroy)(kvm);          -> tdx_mmu_release_hkid()
>   kvm_destroy_vcpus(kvm);                 -> tdx_vcpu_free() -> reclaim
>   static_call_cond(kvm_x86_vm_free)(kvm); -> reclaim
>
> To:
>   (pre_destroy_vm)()                      -> tdx_mmu_release_hkid()
>   kvm_destroy_vcpus(kvm);                 -> reclaim
>   kvm_x86_call(vm_destroy)(kvm);          -> nothing
>   static_call_cond(kvm_x86_vm_free)(kvm); -> reclaim

I was thinking this last one could go away, and TDX could reclaim at vm_destroy?
Or is that not possible because it absolutely must come dead last?

> I'm not seeing a problem.  We can follow up with a real check once you post the
> patches.

Ya.  That ain't happening today.  Got demolished by KVM-Unit-Tests.  :-/

> I'm not 100% confident on the shape of the proposal.  But in general if we can
> add more callbacks it seems likely that we can reproduce the current order.  At
> this stage it seems safer to do that than anything more clever.
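
For what it's worth, the rough shape I'm picturing is below.  Purely a sketch of
the ordering, completely untested; the kvm_x86_pre_destroy_vm static call name,
the static_call_cond() wiring, and where exactly the hook lands relative to
kvm_mmu_pre_destroy_vm() are all guesses at this point.

  void kvm_arch_pre_destroy_vm(struct kvm *kvm)
  {
          /*
           * New, optional hook.  TDX would use this for
           * tdx_mmu_release_hkid(), which needs to happen before any
           * pages get reclaimed.
           */
          static_call_cond(kvm_x86_pre_destroy_vm)(kvm);

          kvm_mmu_pre_destroy_vm(kvm);
  }

  void kvm_arch_destroy_vm(struct kvm *kvm)
  {
          /* <other teardown elided> */

          /*
           * Destroy vCPUs _before_ vm_destroy so that per-vCPU state is
           * reclaimed (tdx_vcpu_free() -> reclaim) while the VM is still
           * intact from the vendor module's perspective.
           */
          kvm_destroy_vcpus(kvm);
          kvm_x86_call(vm_destroy)(kvm);

          /* <other teardown elided> */

          static_call_cond(kvm_x86_vm_free)(kvm);
  }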