Re: [PATCH 09/22] KVM: x86/mmu: Try "unprotect for retry" iff there are indirect SPs

On Wed, Aug 14, 2024, Paolo Bonzini wrote:
> On 8/9/24 21:03, Sean Christopherson wrote:
> > Try to unprotect shadow pages if and only if indirect_shadow_pages is non-
> > zero, i.e. iff there is at least one protected such shadow page.  Pre-
> > checking indirect_shadow_pages avoids taking mmu_lock for write when the
> > gfn is write-protected by a third party, i.e. not for KVM shadow paging,
> > and in the *extremely* unlikely case that a different task has already
> > unprotected the last shadow page.
> > 
> > Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> > ---
> >   arch/x86/kvm/mmu/mmu.c | 3 +++
> >   1 file changed, 3 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 09a42dc1fe5a..358294889baa 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -2736,6 +2736,9 @@ bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa)
> >   	gpa_t gpa = cr2_or_gpa;
> >   	bool r;
> > +	if (!vcpu->kvm->arch.indirect_shadow_pages)
> > +		return false;
> 
> indirect_shadow_pages is accessed without a lock, so here please add a note
> that, while it may be stale, a false negative will only cause KVM to skip
> the "unprotect and retry" optimization.

Correct, I'll add a comment.
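
Something along these lines (exact wording TBD):

	/*
	 * The indirect_shadow_pages count is read without holding mmu_lock
	 * and so can be stale.  That's fine, as a false negative only means
	 * KVM skips the "unprotect and retry" optimization; correctness isn't
	 * affected.
	 */
	if (!vcpu->kvm->arch.indirect_shadow_pages)
		return false;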

> (This is preexisting in reexecute_instruction() and goes away in patch 18, if
> I'm pre-reading that part of the series correctly).
> 
> Bonus points for opportunistically adding a READ_ONCE() here and in
> kvm_mmu_track_write().

Hmm, right, this one should have a READ_ONCE(), but I don't see any reason to
add one in kvm_mmu_track_write().  If the compiler were crazy and generated
multiple loads between the smp_mb() and write_lock(), _and_ the value
transitioned from 1->0, reading '0' on the second go is totally fine because it
means the last shadow page was zapped.  Amusingly, it'd actually be "better" in
that it would avoid unnecessarily taking mmu_lock.
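
For reference, the relevant sequence in kvm_mmu_track_write() is (roughly):

	/* Order the check against the emulated guest write. */
	smp_mb();

	if (!vcpu->kvm->arch.indirect_shadow_pages)
		return;

	write_lock(&vcpu->kvm->mmu_lock);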

Practically speaking, the compiler would have to be broken to generate multiple
loads in the 0->1 case, as that would mean the generated code loaded the value
but ignored the result.  But even if that were to happen, a final read of '1' is
again a-ok.

This code is different because a READ_ONCE() would ensure that indirect_shadow_pages
isn't reloaded for every check.  Though that too would be functionally ok, just
weird.
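
I.e. for kvm_mmu_unprotect_gfn_and_retry(), the check would become (with the
comment from above):

	if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
		return false;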

Obviously a READ_ONCE() in kvm_mmu_track_write() would be harmless, but IMO it
would be more confusing than helpful, e.g. it would raise the question of why
kvm_vcpu_exit_request() doesn't wrap vcpu->mode with READ_ONCE().  Heh, though
arguably vcpu->mode should be wrapped with READ_ONCE(), since
kvm_vcpu_exit_request() is a helper and could be called multiple times without
any code in between that would guarantee a reload.
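
For reference, kvm_vcpu_exit_request() is currently (roughly):

	static inline bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu)
	{
		return vcpu->mode == EXITING_GUEST_MODE ||
		       kvm_request_pending(vcpu) ||
		       xfer_to_guest_mode_work_pending();
	}

i.e. nothing in the helper itself forces a (re)load of vcpu->mode.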



