On Fri, Apr 12, 2024 at 03:46:05PM -0700, Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> On Fri, Apr 12, 2024, Isaku Yamahata wrote:
> > On Fri, Apr 12, 2024 at 09:15:29AM -0700, Reinette Chatre <reinette.chatre@xxxxxxxxx> wrote:
> > > > +void tdx_mmu_release_hkid(struct kvm *kvm)
> > > > +{
> > > > +	while (__tdx_mmu_release_hkid(kvm) == -EBUSY)
> > > > +		;
> > > > }
> > >
> > > As I understand, __tdx_mmu_release_hkid() returns -EBUSY
> > > after TDH.VP.FLUSH has been sent for every vCPU followed by
> > > TDH.MNG.VPFLUSHDONE, which returns TDX_FLUSHVP_NOT_DONE.
> > >
> > > Considering earlier comment that a retry of TDH.VP.FLUSH is not
> > > needed, why is this while() loop here that sends the
> > > TDH.VP.FLUSH again to all vCPUs instead of just a loop within
> > > __tdx_mmu_release_hkid() to _just_ resend TDH.MNG.VPFLUSHDONE?
> > >
> > > Could it be possible for a vCPU to appear during this time, thus
> > > be missed in one TDH.VP.FLUSH cycle, to require a new cycle of
> > > TDH.VP.FLUSH?
> > Yes. There is a race between closing KVM vCPU fd and MMU notifier release hook.
> > When KVM vCPU fd is closed, vCPU context can be loaded again.
> But why is _loading_ a vCPU context problematic?

It's not problematic in itself.  It just makes it a bit harder to understand
why tdx_mmu_release_hkid() issues IPIs on each loop iteration.  I think it's
reasonable to keep the normal path simple and to complicate/penalize the
destruction path.  I probably should have added a comment on the function.

--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
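
For illustration, the comment mentioned above could look roughly like the
following on top of the retry loop. This is only a sketch based on the snippet
and the reasoning quoted in this thread, not the actual patch:

/*
 * Sketch only: the kind of comment that could document the retry loop
 * quoted earlier in this thread.
 */
void tdx_mmu_release_hkid(struct kvm *kvm)
{
	/*
	 * Closing a vCPU fd can race with the MMU notifier release hook:
	 * a vCPU context may be loaded again after this cycle's
	 * TDH.VP.FLUSH has already been issued, so TDH.MNG.VPFLUSHDONE
	 * can fail with TDX_FLUSHVP_NOT_DONE.  Rather than complicating
	 * __tdx_mmu_release_hkid(), retry the whole cycle (IPIs and
	 * TDH.VP.FLUSH to all vCPUs, then TDH.MNG.VPFLUSHDONE).  This
	 * keeps the normal path simple and penalizes only the
	 * destruction path.
	 */
	while (__tdx_mmu_release_hkid(kvm) == -EBUSY)
		;
}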