----- Original Message -----
> From: "Junaid Shahid" <junaids@xxxxxxxxxx>
> To: "Paolo Bonzini" <pbonzini@xxxxxxxxxx>
> Cc: kvm@xxxxxxxxxxxxxxx, andreslc@xxxxxxxxxx, pfeiner@xxxxxxxxxx, "guangrong xiao" <guangrong.xiao@xxxxxxxxxxxxxxx>
> Sent: Wednesday, December 14, 2016 11:36:37 PM
> Subject: Re: [PATCH v3 7/8] kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.
>
> On Wednesday, December 14, 2016 05:28:48 PM Paolo Bonzini wrote:
> >
> > Just a few changes bordering on aesthetics...
> >
> > Please review and let me know if you agree:
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index 6ba62200530a..6b5d8ff66026 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -213,8 +213,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
> >
> >  static inline bool is_access_track_spte(u64 spte)
> >  {
> > -	return shadow_acc_track_mask != 0 &&
> > -	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
> > +	/* Always false if shadow_acc_track_mask is zero. */
> > +	return (spte & shadow_acc_track_mask) == shadow_acc_track_value;
> >  }
>
> This looks good.
>
> >  /*
> > @@ -526,23 +526,24 @@ static bool spte_can_locklessly_be_made_writable(u64 spte)
> >
> >  static bool spte_has_volatile_bits(u64 spte)
> >  {
> > +	if (is_shadow_present_pte(spte))
> > +		return false;
> > +
>
> This should be !is_shadow_present_pte

Caught me sending wrong patch. :(

> >  	/*
> >  	 * Always atomically update spte if it can be updated
> >  	 * out of mmu-lock, it can ensure dirty bit is not lost,
> >  	 * also, it can help us to get a stable is_writable_pte()
> >  	 * to ensure tlb flush is not missed.
> >  	 */
> > -	if (spte_can_locklessly_be_made_writable(spte))
> > +	if (spte_can_locklessly_be_made_writable(spte) ||
> > +	    is_access_track_spte(spte))
> >  		return true;
> >
> > -	if (!is_shadow_present_pte(spte))
> > -		return false;
> > -
> > -	if (!shadow_accessed_mask)
> > -		return is_access_track_spte(spte);
> > +	if ((spte & shadow_accessed_mask) == 0 ||
> > +	    (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0))
> > +		return true;
>
> We also need a shadow_accessed_mask != 0 check here, otherwise it will
> always return true when shadow_accessed_mask is 0.

... but this is not the wrong patch, it's a genuine mistake.  Thanks!

Paolo
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html