On 12/8/21 01:15, Sean Christopherson wrote:
@@ -832,8 +832,14 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 	if (memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs))) {
 		memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
 		kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
-		/* Ensure the dirty PDPTEs to be loaded. */
-		kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
+		/*
+		 * Ensure the dirty PDPTEs to be loaded for VMX with EPT
+		 * enabled or pae_root to be reconstructed for shadow paging.
+		 */
+		if (tdp_enabled)
+			kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
+		else
+			kvm_mmu_free_roots(vcpu, vcpu->arch.mmu, KVM_MMU_ROOT_CURRENT);
Shouldn't matter since it's legacy shadow paging, but @mmu should be used
instead of vcpu->arch.mmu.
Actually, in kvm/next there's no mmu parameter to load_pdptrs, so it's
okay to keep vcpu->arch.mmu.
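(Going by the hunk header above minus the mmu argument, the kvm/next
prototype is presumably just

	int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);

so it has to pick up the mmu from the vcpu internally anyway.)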
To avoid a dependency on the previous patch, I think it makes sense to have this be:
if (!tdp_enabled && memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs)))
kvm_mmu_free_roots(vcpu, mmu, KVM_MMU_ROOT_CURRENT);
before the memcpy().
Then we can decide independently whether it's safe to skip
KVM_REQ_LOAD_MMU_PGD when the PDPTRs are unchanged with respect to the MMU.
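If I follow, the hunk would then look roughly like this (untested, just
rearranging the lines quoted above):

	if (!tdp_enabled && memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs)))
		kvm_mmu_free_roots(vcpu, mmu, KVM_MMU_ROOT_CURRENT);

	if (memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs))) {
		memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
		kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
		/* Ensure the dirty PDPTEs to be loaded. */
		kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
	}

so freeing the shadow roots no longer depends on how the request is
handled in the second block.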
Do you disagree that there's already an invariant that the PDPTRs can
only be dirty if KVM_REQ_LOAD_MMU_PGD is pending, and that therefore a
previous change to the PDPTRs would have triggered KVM_REQ_LOAD_MMU_PGD?
This is opposed to the guest TLB flush due to MOV CR3; that one has to
be done unconditionally for PAE paging, and it is handled separately
within kvm_set_cr3.
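Stated as an assertion (purely illustrative, this check does not exist
in the code), the invariant I have in mind is roughly:

	/* Dirty PDPTRs imply a pending KVM_REQ_LOAD_MMU_PGD. */
	WARN_ON_ONCE(kvm_register_is_dirty(vcpu, VCPU_EXREG_PDPTR) &&
		     !kvm_test_request(KVM_REQ_LOAD_MMU_PGD, vcpu));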
Paolo