On Mon, Feb 6, 2023 at 2:06 PM Ben Gardon <bgardon@xxxxxxxxxx> wrote:
>
> On Fri, Feb 3, 2023 at 11:28 AM Vipin Sharma <vipinsh@xxxxxxxxxx> wrote:
> >
> > +static inline u64 kvm_tdp_mmu_clear_spte_bit(struct tdp_iter *iter, u64 mask)
> > +{
> > +	atomic64_t *sptep;
> > +
> > +	if (kvm_tdp_mmu_spte_has_volatile_bits(iter->old_spte, iter->level)) {
> > +		sptep = (atomic64_t *)rcu_dereference(iter->sptep);
> > +		return (u64)atomic64_fetch_and(~mask, sptep);
> > +	}
> > +
> > +	__kvm_tdp_mmu_write_spte(iter->sptep, iter->old_spte & ~mask);
> > +	return iter->old_spte;
> > +}
> > +
>
> If you do end up sending another version of the series and feel like
> breaking up this patch, you could probably split this part out since
> the change to how we set the SPTE and how we handle that change are
> somewhat independent. I like the switch to atomic64_fetch_and though.
> No idea if it's faster, but I would believe it could be.

The changes are actually dependent. Using the atomic-AND makes it
impossible for KVM to clear a volatile bit that it wasn't intending to
clear, and that is what enables simplifying the SPTE change propagation.

Instead I would split out the opportunistic cleanup of @record_dirty_log
to a separate patch, since that's dead-code cleanup and refactoring.
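
To make the "can't clear a bit it wasn't intending to clear" point more
concrete, here is a minimal user-space sketch using plain C11 atomics
(not KVM code; fake_spte, SPTE_W_BIT, SPTE_AD_BIT and the two helpers
are made-up names for illustration). The racy helper can lose a bit
that another agent, e.g. hardware setting Accessed/Dirty, flips between
the load and the store; the fetch-and helper can only ever clear bits
in @mask, and its return value reflects everything that was set at the
instant of the update.

  #include <stdatomic.h>
  #include <stdint.h>
  #include <stdio.h>

  #define SPTE_W_BIT   (1ull << 1)   /* bit we intend to clear */
  #define SPTE_AD_BIT  (1ull << 5)   /* "volatile" bit another agent may set */

  static _Atomic uint64_t fake_spte = SPTE_W_BIT;

  /* Non-atomic read-modify-write: a concurrent set of SPTE_AD_BIT between
   * the load and the store is silently wiped out by the stale write-back. */
  static uint64_t clear_bit_racy(uint64_t mask)
  {
  	uint64_t old = atomic_load(&fake_spte);

  	/* ... another agent could set SPTE_AD_BIT right here ... */
  	atomic_store(&fake_spte, old & ~mask);
  	return old;
  }

  /* Atomic AND: only bits in @mask can be cleared, and the returned value
   * captures every bit that was set at the moment of the update. */
  static uint64_t clear_bit_atomic(uint64_t mask)
  {
  	return atomic_fetch_and(&fake_spte, ~mask);
  }

  int main(void)
  {
  	uint64_t old = clear_bit_racy(SPTE_W_BIT);

  	printf("racy:   old %#llx, now %#llx\n", (unsigned long long)old,
  	       (unsigned long long)atomic_load(&fake_spte));

  	atomic_store(&fake_spte, SPTE_W_BIT | SPTE_AD_BIT);
  	old = clear_bit_atomic(SPTE_W_BIT);
  	printf("atomic: old %#llx, now %#llx\n", (unsigned long long)old,
  	       (unsigned long long)atomic_load(&fake_spte));
  	return 0;
  }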