On Wed, Feb 12, 2025 at 2:07 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Tue, Feb 04, 2025, James Houghton wrote:
> > diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> > index 22551e2f1d00..e984b440c0f0 100644
> > --- a/arch/x86/kvm/mmu/spte.c
> > +++ b/arch/x86/kvm/mmu/spte.c
> > @@ -142,8 +142,14 @@ bool spte_has_volatile_bits(u64 spte)
> >                 return true;
> >
> >         if (spte_ad_enabled(spte)) {
> > -               if (!(spte & shadow_accessed_mask) ||
> > -                   (is_writable_pte(spte) && !(spte & shadow_dirty_mask)))
> > +               /*
> > +                * Do not check the Accessed bit. It can be set (by the CPU)
> > +                * and cleared (by kvm_tdp_mmu_age_spte()) without holding
>
> When possible, don't reference functions by name in comments.  There are
> situations where it's unavoidable, e.g. when calling out memory barrier
> pairs, but for cases like this, it inevitably leads to stale code.

Good point. Thanks.

> > +                * the mmu_lock, but when clearing the Accessed bit, we do
> > +                * not invalidate the TLB, so we can already miss Accessed bit
>
> No "we" in comments or changelog.

Ah, my mistake.

> > +                * updates.
> > +                */
> > +               if (is_writable_pte(spte) && !(spte & shadow_dirty_mask))
> >                         return true;
>
> This 100% belongs in a separate preparatory patch.  And if it's moved to a
> separate patch, then the rename can/should happen at the same time.
>
> And with the Accessed check gone, and looking forward to the below change,
> I think this is the perfect opportunity to streamline the final check to:
>
>         return spte_ad_enabled(spte) && is_writable_pte(spte) &&
>                !(spte & shadow_dirty_mask);

LGTM.

> No need to send another version, I'll move things around when applying.

Thanks!

> Also, as discussed off-list, I'm 99% certain that with the lockless aging,
> KVM must atomically update A/D-disabled SPTEs, as the SPTE can be
> access-tracked and restored outside of mmu_lock.  E.g. a task that holds
> mmu_lock and is clearing the writable bit needs to update it atomically to
> avoid its change from being lost.

Yeah, you're right. The same logic applies to A/D-enabled SPTEs too; it's
just that KVM chooses to tolerate losing Accessed bit updates there. For
A/D-disabled SPTEs, that isn't an option, so this makes sense. Thanks!
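To spell out the lost-update race for the archives, here's a minimal
userspace sketch using C11 atomics (made-up bit positions and helper names,
not the actual KVM code). A plain read-modify-write can silently drop a
concurrent clear of the writable bit, while a cmpxchg loop retries against
the current value, which is what forcing atomic updates buys:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_WRITABLE     (1ull << 1)    /* hypothetical bit layout */
#define SPTE_ACCESS_TRACK (1ull << 60)

static _Atomic uint64_t spte = SPTE_WRITABLE;

/* Racy: clobbers whatever changed between the load and the store. */
static void mark_access_tracked_racy(void)
{
        uint64_t old = atomic_load(&spte);

        /* A concurrent clear of SPTE_WRITABLE here would be lost. */
        atomic_store(&spte, old | SPTE_ACCESS_TRACK);
}

/* Safe: retries until the update applies against the current value. */
static void mark_access_tracked_atomic(void)
{
        uint64_t old = atomic_load(&spte);

        /* On failure, 'old' is reloaded with the current value. */
        while (!atomic_compare_exchange_weak(&spte, &old,
                                             old | SPTE_ACCESS_TRACK))
                ;
}

int main(void)
{
        mark_access_tracked_atomic();
        printf("spte = 0x%llx\n", (unsigned long long)atomic_load(&spte));
        return 0;
}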
> I'll slot this in:
>
> --
> From: Sean Christopherson <seanjc@xxxxxxxxxx>
> Date: Wed, 12 Feb 2025 12:58:39 -0800
> Subject: [PATCH 03/10] KVM: x86/mmu: Always update A/D-disabled SPTEs
>  atomically
>
> In anticipation of aging SPTEs outside of mmu_lock, force A/D-disabled
> SPTEs to be updated atomically, as aging A/D-disabled SPTEs will mark them
> for access-tracking outside of mmu_lock.  Coupled with restoring
> access-tracked SPTEs in the fast page fault handler, the end result is
> that A/D-disabled SPTEs will be volatile at all times.
>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>

Reviewed-by: James Houghton <jthoughton@xxxxxxxxxx>

> ---
>  arch/x86/kvm/mmu/spte.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 9663ba867178..0f9f47b4ab0e 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -141,8 +141,11 @@ bool spte_needs_atomic_update(u64 spte)
>         if (!is_writable_pte(spte) && is_mmu_writable_spte(spte))
>                 return true;
>
> -       /* Access-tracked SPTEs can be restored by KVM's fast page fault handler. */
> -       if (is_access_track_spte(spte))
> +       /*
> +        * A/D-disabled SPTEs can be access-tracked by aging, and access-tracked
> +        * SPTEs can be restored by KVM's fast page fault handler.
> +        */
> +       if (!spte_ad_enabled(spte))
>                 return true;
>
>         /*
> @@ -151,8 +154,7 @@ bool spte_needs_atomic_update(u64 spte)
>          * invalidate TLBs when aging SPTEs, and so it's safe to clobber the
>          * Accessed bit (and rare in practice).
>          */
> -       return spte_ad_enabled(spte) && is_writable_pte(spte) &&
> -              !(spte & shadow_dirty_mask);
> +       return is_writable_pte(spte) && !(spte & shadow_dirty_mask);
> }
>
> bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> --