On Mon, Jun 5, 2023 at 11:12 AM Jim Mattson <jmattson@xxxxxxxxxx> wrote:
>
> On Mon, Jun 5, 2023 at 10:42 AM Mingwei Zhang <mizhang@xxxxxxxxxx> wrote:
> >
> > On Mon, Jun 5, 2023 at 9:55 AM Jim Mattson <jmattson@xxxxxxxxxx> wrote:
> > >
> > > On Sun, Jun 4, 2023 at 5:43 PM Mingwei Zhang <mizhang@xxxxxxxxxx> wrote:
> > > >
> > > > Remove the KVM MMU write lock when accessing the indirect_shadow_pages
> > > > counter when the page role is direct, because this counter value is
> > > > only used as a coarse-grained heuristic to check whether a nested
> > > > guest is active. Racing with this heuristic without the mmu lock is
> > > > harmless because the corresponding indirect shadow sptes for the GPA
> > > > will either be zapped by this thread or by some other thread that has
> > > > previously zapped all indirect shadow pages and driven the counter
> > > > to 0.
> > > >
> > > > Because of that, remove the KVM MMU write lock pair to potentially
> > > > reduce lock contention and improve the performance of nested VMs. In
> > > > addition, opportunistically change the 'direct mmu' comment to make
> > > > the description consistent with other places.
> > > >
> > > > Reported-by: Jim Mattson <jmattson@xxxxxxxxxx>
> > > > Signed-off-by: Mingwei Zhang <mizhang@xxxxxxxxxx>
> > > > ---
> > > >  arch/x86/kvm/x86.c | 10 ++--------
> > > >  1 file changed, 2 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > > index 5ad55ef71433..97cfa5a00ff2 100644
> > > > --- a/arch/x86/kvm/x86.c
> > > > +++ b/arch/x86/kvm/x86.c
> > > > @@ -8585,15 +8585,9 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> > > >
> > > >         kvm_release_pfn_clean(pfn);
> > > >
> > > > -       /* The instructions are well-emulated on direct mmu. */
> > > > +       /* The instructions are well-emulated on Direct MMUs. */
> > > >         if (vcpu->arch.mmu->root_role.direct) {
> > > > -               unsigned int indirect_shadow_pages;
> > > > -
> > > > -               write_lock(&vcpu->kvm->mmu_lock);
> > > > -               indirect_shadow_pages = vcpu->kvm->arch.indirect_shadow_pages;
> > > > -               write_unlock(&vcpu->kvm->mmu_lock);
> > > > -
> > > > -               if (indirect_shadow_pages)
> > > > +               if (READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
> > >
> > > I don't understand the need for READ_ONCE() here. That implies that
> > > there is something tricky going on, and I don't think that's the case.
> >
> > READ_ONCE() is just telling the compiler not to remove the read. Since
> > this is reading a global variable, the compiler might just read a
> > previous copy if the value has already been read into a local
> > variable. But that is not the case here...
>
> Not a global variable, actually, but that's not relevant. What would
> be wrong with using a previously read copy?

Nothing would be wrong, I think, since this is already just a heuristic.

> We don't always wrap reads in READ_ONCE(). It's actually pretty rare.
> So, there should be an explicit and meaningful reason.
>
> > Note I see there is another READ_ONCE for
> > kvm->arch.indirect_shadow_pages, so I am reusing the same thing.
>
> That's not a good reason. "If all of your friends jumped off a cliff,
> would you?" :)
>
> > I did check the reordering issue but it should be fine because when
> > 'we' see indirect_shadow_pages as 0, the shadow pages must have
> > already been zapped. Not only because of the locking, but also the
> > program order in __kvm_mmu_prepare_zap_page() shows that it will zap
> > shadow pages first before updating the stats.
Yeah, I forgot to mention that removing READ_ONCE() is OK with me.
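
To make the compiler argument above concrete, here is a minimal userspace
sketch of the property READ_ONCE() provides. The my_read_once() macro and
the harness around it are hypothetical stand-ins for illustration only, not
the kernel's implementation:

#include <stdio.h>

/*
 * Hypothetical stand-in for the kernel's READ_ONCE(): the volatile
 * access forces the compiler to emit a fresh load from memory at this
 * point, rather than reusing a value it already holds in a register.
 */
#define my_read_once(x) (*(const volatile typeof(x) *)&(x))

/* Stand-in for kvm->arch.indirect_shadow_pages. */
static unsigned int indirect_shadow_pages;

static int nested_guest_likely_active(void)
{
	/*
	 * For a coarse-grained heuristic like this one, a plain read
	 * would be just as correct: a stale value only means one extra
	 * trip through the slow path.  The volatile read merely pins
	 * down *where* the load happens.
	 */
	return my_read_once(indirect_shadow_pages) != 0;
}

int main(void)
{
	indirect_shadow_pages = 1;
	printf("heuristic: %d\n", nested_guest_likely_active());
	indirect_shadow_pages = 0;
	printf("heuristic: %d\n", nested_guest_likely_active());
	return 0;
}

In this toy both forms compile to the same single load; the point of the
thread stands that, absent a concrete re-read or reordering hazard, the
plain read is sufficient here.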