On Thu, Jan 19, 2023, Huang, Kai wrote:
> On Thu, 2023-01-19 at 15:37 +0000, Sean Christopherson wrote:
> > On Thu, Jan 19, 2023, Huang, Kai wrote:
> > > On Tue, 2023-01-17 at 21:01 +0000, Sean Christopherson wrote:
> > > > On Tue, Jan 17, 2023, Sean Christopherson wrote:
> > > > > On Tue, Jan 17, 2023, Zhi Wang wrote:
> > > > Oh, the other important piece I forgot to mention is that dropping mmu_lock
> > > > deep in KVM's MMU in order to wait isn't always an option. Most flows would
> > > > play nice with dropping mmu_lock and sleeping, but some paths, e.g. from the
> > > > mmu_notifier, (conditionally) disallow sleeping.
> > >
> > > Could we do something similar to tdp_mmu_iter_cond_resched(), rather than
> > > simply busy-retrying "X times", at least on those paths that can release
> > > mmu_lock?
> >
> > That's effectively what happens by unwinding up the stack with an error code.
> > Eventually the page fault handler will get the error and retry the guest.
> >
> > > Basically we treat TDX_OPERAND_BUSY as seamcall_needbreak(), similar to
> > > rwlock_needbreak(). I haven't thought about details though.
> >
> > I am strongly opposed to that approach. I do not want to pollute KVM's MMU code
> > with a bunch of retry logic and error handling just because the TDX module is
> > ultra paranoid and hostile to hypervisors.
>
> Right. But IIUC there are legitimate cases where an S-EPT SEAMCALL can return
> BUSY due to multiple threads trying to read/modify the S-EPT simultaneously
> under the TDP MMU. For instance, parallel page faults on different vCPUs on
> private pages. I believe this is the main reason to retry.

Um, crud. I think there's a bigger issue. KVM always operates on its copy of the
S-EPT tables and assumes that the real S-EPT tables will always be synchronized
with KVM's mirror. That assumption doesn't hold true without serializing SEAMCALLs
in some way. E.g. if a SPTE is zapped and mapped at the same time, we can end up
with:

  vCPU0                     vCPU1
  =====                     =====
  mirror[x] = xyz

                            old_spte = mirror[x]

                            mirror[x] = REMOVED_SPTE

                            sept[x] = REMOVED_SPTE

  sept[x] = xyz

In other words, when mmu_lock is held for read, KVM relies on atomic SPTE updates.
With the mirror=>s-ept scheme, updates are no longer atomic and everything falls
apart. Gracefully retrying only papers over the visible failures; the really
problematic scenarios are where multiple updates race and _don't_ trigger
conflicts in the TDX module.

> We previously used a spinlock around the SEAMCALLs to avoid this, but it looks
> like that is not preferred.

That doesn't address the race above either. And even if it did, serializing all
S-EPT SEAMCALLs for a VM is not an option, at least not in the long term.

The least invasive idea I have is to expand the TDP MMU's concept of "frozen"
SPTEs and freeze (a.k.a. lock) the SPTE (KVM's mirror) until the corresponding
S-EPT update completes.

The other idea is to scrap the mirror concept entirely, though I gotta imagine
that would provide pretty awful performance.

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 0d8deefee66c..bcb398e71475 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -198,9 +198,9 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
 /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
 static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
 
-static inline bool is_removed_spte(u64 spte)
+static inline bool is_frozen_spte(u64 spte)
 {
-	return spte == REMOVED_SPTE;
+	return spte == REMOVED_SPTE || (spte & FROZEN_SPTE);
 }
 
 /* Get an SPTE's index into its parent's page table (and the spt array). */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bba33aea0fb0..7f34eccadf98 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -651,6 +651,9 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
+	if (<is TDX> && new_spte != REMOVED_SPTE)
+		new_spte |= FROZEN_SPTE;
+
 	/*
 	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
 	 * does not hold the mmu_lock.
@@ -662,6 +665,9 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 				      new_spte, iter->level, true);
 	handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
 
+	if (<is TDX> && new_spte != REMOVED_SPTE)
+		__kvm_tdp_mmu_write_spte(iter->sptep, new_spte & ~FROZEN_SPTE);
+
 	return 0;
 }
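
On the consumer side, since the spte.h hunk renames is_removed_spte() to
is_frozen_spte(), the bail-and-retry logic that kvm_tdp_mmu_map() already has
for removed SPTEs should cover in-flight S-EPT updates for free. Very rough
sketch, with the rest of the loop elided (is_frozen_spte() and FROZEN_SPTE are
the hypothetical additions from the diff above, not existing upstream symbols):

	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
		...

		/*
		 * If the SPTE has been frozen by another task, e.g. because
		 * its S-EPT update is still in flight, give up and retry the
		 * fault; the owner unfreezes the SPTE once the SEAMCALL
		 * completes.
		 */
		if (is_frozen_spte(iter.old_spte))
			goto retry;

		...
	}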
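
And to make the race above concrete, here's a throwaway user-space model of the
diagram (everything in it, REMOVED_SPTE's value included, is a stand-in, not a
real KVM or TDX symbol, and the interleaving is scripted to replay the diagram's
order). Each mirror update is atomic, but nothing orders the two writers'
follow-on updates to the real S-EPT, so the two tables diverge:

	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	#define REMOVED_SPTE 0x5a0ULL	/* stand-in "removed" value */

	static _Atomic uint64_t mirror;	/* KVM's copy of the S-EPT entry */
	static uint64_t sept;		/* the real, TDX-owned S-EPT entry */

	int main(void)
	{
		uint64_t xyz = 0x1234;
		uint64_t expected = 0;

		/* vCPU0: installs xyz in the mirror; its SEAMCALL hasn't run yet. */
		atomic_compare_exchange_strong(&mirror, &expected, xyz);

		/* vCPU1: zaps the SPTE, updating the mirror and then the S-EPT. */
		expected = xyz;
		atomic_compare_exchange_strong(&mirror, &expected, REMOVED_SPTE);
		sept = REMOVED_SPTE;

		/* vCPU0: finally issues its SEAMCALL with the now-stale value. */
		sept = xyz;

		printf("mirror=%#llx sept=%#llx => desynchronized\n",
		       (unsigned long long)atomic_load(&mirror),
		       (unsigned long long)sept);
		return 0;
	}

With the freeze approach, vCPU1's read of mirror[x] would return
xyz | FROZEN_SPTE, so it would bail and retry instead of zapping, and the stale
sept[x] = xyz write could never land last.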