Re: [PATCH 09/22] KVM: x86/mmu: Try "unprotect for retry" iff there are indirect SPs

On 8/9/24 21:03, Sean Christopherson wrote:
> Try to unprotect shadow pages if and only if indirect_shadow_pages is non-
> zero, i.e. iff there is at least one such protected shadow page.  Pre-
> checking indirect_shadow_pages avoids taking mmu_lock for write when the
> gfn is write-protected by a third party, i.e. not for KVM shadow paging,
> and in the *extremely* unlikely case that a different task has already
> unprotected the last shadow page.
>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 09a42dc1fe5a..358294889baa 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2736,6 +2736,9 @@ bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa)
>  	gpa_t gpa = cr2_or_gpa;
>  	bool r;
>
> +	if (!vcpu->kvm->arch.indirect_shadow_pages)
> +		return false;

indirect_shadow_pages is accessed without a lock, so please add a comment here noting that, while the value may be stale, a false negative only causes KVM to skip the "unprotect and retry" optimization. (This is preexisting in reexecute_instruction() and goes away in patch 18, if I'm pre-reading that part of the series correctly.)

Bonus points for opportunistically adding a READ_ONCE() here and in kvm_mmu_track_write().

Paolo

>  	if (!vcpu->arch.mmu->root_role.direct)
>  		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);

