On Mon, Feb 6, 2023 at 2:06 PM Ben Gardon <bgardon@xxxxxxxxxx> wrote:
>
> On Fri, Feb 3, 2023 at 11:28 AM Vipin Sharma <vipinsh@xxxxxxxxxx> wrote:
> > diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
> > index 30a52e5e68de..21046b34f94e 100644
> > --- a/arch/x86/kvm/mmu/tdp_iter.h
> > +++ b/arch/x86/kvm/mmu/tdp_iter.h
> > @@ -121,4 +121,17 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
> >  void tdp_iter_next(struct tdp_iter *iter);
> >  void tdp_iter_restart(struct tdp_iter *iter);
> >
> > +static inline u64 kvm_tdp_mmu_clear_spte_bit(struct tdp_iter *iter, u64 mask)
> > +{
> > +	atomic64_t *sptep;
> > +
> > +	if (kvm_tdp_mmu_spte_has_volatile_bits(iter->old_spte, iter->level)) {
> > +		sptep = (atomic64_t *)rcu_dereference(iter->sptep);
> > +		return (u64)atomic64_fetch_and(~mask, sptep);
> > +	}
> > +
> > +	__kvm_tdp_mmu_write_spte(iter->sptep, iter->old_spte & ~mask);
> > +	return iter->old_spte;
> > +}
> > +
>
> If you do end up sending another version of the series and feel like
> breaking up this patch, you could probably split this part out since
> the change to how we set the SPTE and how we handle that change are
> somewhat independent. I like the switch to atomic64_fetch_and though.
> No idea if it's faster, but I would believe it could be.

David explained in his email why it is not independent.

> Totally optional, but there's currently no validation on the mask.
> Maybe we're only calling this in one place, but it might be worth
> clarifying the limits (if any) on what bits can be set in the mask. I
> don't think there necessarily need to be limits as of this commit, but
> the handling around this function where it's called here would
> obviously not be sufficient if the mask were -1UL or something.

I cannot think of any specific mask validation that would be useful
here. Let us keep it as it is; we can revisit this API if there is a
need to add a mask check in the future. If someone passes -1UL, then it
is on them and how they are using the API.
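
For illustration only, here is a minimal sketch of how such a helper
might be used by a caller that clears a single bit from an SPTE while
walking with a tdp_iter. The wrapper function name and the use of
shadow_dirty_mask are assumptions made for this sketch, not something
taken from the patch itself:

/*
 * Hypothetical caller sketch: clear one assumed bit (shadow_dirty_mask)
 * from the SPTE the iterator currently points at.
 */
static void example_clear_dirty_bit(struct tdp_iter *iter)
{
	u64 dbit = shadow_dirty_mask;	/* assumed single-bit mask */

	/* Nothing to do if the cached SPTE value does not have the bit. */
	if (!(iter->old_spte & dbit))
		return;

	/*
	 * kvm_tdp_mmu_clear_spte_bit() falls back to an atomic fetch-and
	 * when the SPTE has volatile bits, so the returned value is the
	 * SPTE as it was immediately before the bit was cleared.
	 */
	iter->old_spte = kvm_tdp_mmu_clear_spte_bit(iter, dbit);
}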