On Sun, Jan 23, 2022 at 5:36 AM <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
>
> The patch below does not apply to the 5.10-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@xxxxxxxxxxxxxxx>.

I'll take a look and send a backport to 5.10.

>
> thanks,
>
> greg k-h
>
> ------------------ original commit in Linus's tree ------------------
>
> From 7c8a4742c4abe205ec9daf416c9d42fd6b406e8e Mon Sep 17 00:00:00 2001
> From: David Matlack <dmatlack@xxxxxxxxxx>
> Date: Thu, 13 Jan 2022 23:30:17 +0000
> Subject: [PATCH] KVM: x86/mmu: Fix write-protection of PTs mapped by the TDP
>  MMU
>
> When the TDP MMU is write-protecting GFNs for page table protection (as
> opposed to for dirty logging, or due to the HVA not being writable), it
> checks if the SPTE is already write-protected and if so skips modifying
> the SPTE and the TLB flush.
>
> This behavior is incorrect because it fails to check if the SPTE
> is write-protected for page table protection, i.e. fails to check
> that MMU-writable is '0'. If the SPTE was write-protected for dirty
> logging but not page table protection, the SPTE could locklessly be made
> writable, and vCPUs could still be running with writable mappings cached
> in their TLB.
>
> Fix this by only skipping setting the SPTE if the SPTE is already
> write-protected *and* MMU-writable is already clear. Technically,
> checking only MMU-writable would suffice; a SPTE cannot be writable
> without MMU-writable being set. But check both to be paranoid and
> because it arguably yields more readable code.
>
> Fixes: 46044f72c382 ("kvm: x86/mmu: Support write protection for nesting in tdp MMU")
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: David Matlack <dmatlack@xxxxxxxxxx>
> Message-Id: <20220113233020.3986005-2-dmatlack@xxxxxxxxxx>
> Reviewed-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7b1bc816b7c3..bc9e3553fba2 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1442,12 +1442,12 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
>  		    !is_last_spte(iter.old_spte, iter.level))
>  			continue;
>
> -		if (!is_writable_pte(iter.old_spte))
> -			break;
> -
>  		new_spte = iter.old_spte &
>  			~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
>
> +		if (new_spte == iter.old_spte)
> +			break;
> +
>  		tdp_mmu_set_spte(kvm, &iter, new_spte);
>  		spte_set = true;
>  	}
>
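For anyone reading along, here is a minimal standalone sketch of why the old check was insufficient (this is not the kernel source; the bit positions and helper names below are illustrative assumptions, the real masks live in arch/x86/kvm/mmu/spte.h). An SPTE that was write-protected only for dirty logging has the hardware writable bit clear but MMU-writable still set, so it can be locklessly re-made writable and therefore still needs to be updated and flushed:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit positions, not the kernel's definitions. */
#define PT_WRITABLE_BIT		(1ULL << 1)	/* hardware writable bit */
#define MMU_WRITABLE_BIT	(1ULL << 57)	/* stand-in for shadow_mmu_writable_mask */

/* Old logic: skip the SPTE as soon as the hardware writable bit is clear. */
static bool old_needs_update(uint64_t spte)
{
	return spte & PT_WRITABLE_BIT;
}

/* Fixed logic: skip only if clearing both bits would change nothing. */
static bool new_needs_update(uint64_t spte)
{
	return (spte & ~(PT_WRITABLE_BIT | MMU_WRITABLE_BIT)) != spte;
}

int main(void)
{
	/* SPTE write-protected for dirty logging: writable clear, MMU-writable set. */
	uint64_t spte = MMU_WRITABLE_BIT;

	/* Old check would skip this SPTE; the fixed check still updates it. */
	return old_needs_update(spte) != new_needs_update(spte) ? 0 : 1;
}

With the fixed shape, any SPTE that still has MMU-writable set gets rewritten and the TLB flushed, which is what the "if (new_spte == iter.old_spte) break;" check in the upstream patch achieves.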