On 07/08/2017 21:11, Brijesh Singh wrote:
> Commit 1472775 (kvm: svm: Add support for additional SVM NPF error codes)
> added a new error code to aid nested page fault handling. The commit
> unprotects (kvm_mmu_unprotect_page) the page when we get an NPF due to a
> guest page table walk where the page was marked RO.
>
> Paolo highlighted a use case where an L0->L2 shadow nested page table is
> marked read-only, in particular when a page is read-only in L1's nested
> page table. If such a page is accessed by L2 while walking page tables
> it can cause a nested page fault (page table walks are write accesses).
> However, after kvm_mmu_unprotect_page we may get another page fault, and
> again, in an endless stream.
>
> To cover this use case, we qualify the new error_code check with
> vcpu->arch.mmu.direct_map so that the error_code check runs only for the
> L1 guest, and not for the L2 guest. This restricts the fix to L1 and
> avoids hitting the above use case.
>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: "Radim Krčmář" <rkrcmar@xxxxxxxxxx>
> Cc: Thomas Lendacky <thomas.lendacky@xxxxxxx>
> Signed-off-by: Brijesh Singh <brijesh.singh@xxxxxxx>
> ---
>
> See http://marc.info/?l=kvm&m=150153155519373&w=2 for a detailed
> discussion of the use case and code flow.
>
>  arch/x86/kvm/mmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 9b1dd11..4aaa4aa 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4839,7 +4839,8 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
>  	 * Note: AMD only (since it supports the PFERR_GUEST_PAGE_MASK used
>  	 *       in PFERR_NEXT_GUEST_PAGE)
>  	 */
> -	if (error_code == PFERR_NESTED_GUEST_PAGE) {
> +	if (vcpu->arch.mmu.direct_map &&
> +	    (error_code == PFERR_NESTED_GUEST_PAGE)) {
>  		kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2));
>  		return 1;
>  	}
>

Thanks, queued for 4.14.

Paolo
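
For illustration, a minimal standalone sketch of the qualified check; this
is a simplified model, not the actual KVM code. The struct, field name and
constant below are stand-ins for vcpu->arch.mmu.direct_map and
PFERR_NESTED_GUEST_PAGE, whose real definitions live in the KVM sources.

	/* Simplified, hypothetical model of the check added by the patch. */
	#include <stdbool.h>
	#include <stdint.h>

	/* Placeholder value; the real PFERR_NESTED_GUEST_PAGE is a
	 * combination of page-fault error-code bits defined in KVM. */
	#define EXAMPLE_PFERR_NESTED_GUEST_PAGE 0x100ULL

	struct example_vcpu {
		/* true when L1 runs on NPT directly; false when KVM is
		 * shadowing nested page tables for L2 (L0->L2). */
		bool direct_map;
	};

	/* Only unprotect-and-retry for faults taken while L1's own page
	 * tables are walked (direct map). For L0->L2 shadow page tables
	 * the fault falls through to the normal shadow-MMU handling, so
	 * we do not loop on unprotect. */
	static bool example_should_unprotect(const struct example_vcpu *vcpu,
					     uint64_t error_code)
	{
		return vcpu->direct_map &&
		       error_code == EXAMPLE_PFERR_NESTED_GUEST_PAGE;
	}

With direct_map false (the L0->L2 shadow case from the use case above), the
helper returns false and the fault takes the regular shadow-MMU path instead
of repeatedly unprotecting the page.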