On 03/29/2012 05:27 PM, Xiao Guangrong wrote:

> +static bool
> +FNAME(fast_pf_fetch_indirect_spte)(struct kvm_vcpu *vcpu, u64 *sptep,
> +				   u64 *new_spte, gfn_t gfn,
> +				   u32 expect_access, u64 spte)
> +{
> +	struct kvm_mmu_page *sp = page_header(__pa(sptep));
> +	pt_element_t gpte;
> +	gpa_t pte_gpa;
> +	unsigned pte_access;
> +
> +	if (sp->role.direct)
> +		return fast_pf_fetch_direct_spte(vcpu, sptep, new_spte,
> +						 gfn, expect_access, spte);
> +
> +	pte_gpa = FNAME(get_sp_gpa)(sp);
> +	pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
> +
> +	if (kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &gpte,
> +				  sizeof(pt_element_t)))
> +		return false;
> +
> +	if (FNAME(invalid_gpte)(vcpu, gpte))
> +		return false;
> +
> +	if (gpte_to_gfn(gpte) != gfn)
> +		return false;
> +

Oh, this cannot prevent the gpte from having been changed; the
following case can be triggered:

VCPU 0                           VCPU 1                      VCPU 2

gpte = gfn1 + RO + S + NX
spte = gfn1's pfn + RO + NX
                                 modify gpte:
                                 gpte = gfn2 + W + U + X
                                 (due to an unsync sp or write
                                 emulation before calling
                                 kvm_mmu_pte_write())
                                                             page fault on gpte:
                                                             gfn = gfn2
fast page fault:
spte = gfn1's pfn + W + U + X
(it can also break shadow page
table write protection)

OOPS!!!

The issue is that the gfn no longer matches the pfn in the spte.
Maybe we can use sp->gfns[] properly to avoid it:

- sp->gfns is freed in an RCU context
- sp->gfns[] is initialized to INVALID_GFN
- when a spte is dropped, set the corresponding sp->gfns[] entry back
  to INVALID_GFN

On the fast page fault path, we can compare sp->gfns[] with the gfn
read from the gpte, and only do the cmpxchg if they are the same.
Then the whole thing becomes safe, since:

- we set the identification in the spte before the check, which means
  any later spte change is noticed by the cmpxchg.
- checking sp->gfns[] ensures the spte is pointing to gfn's pfn.
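
To make the ordering concrete, here is a minimal sketch of the check,
not an actual patch: fast_pf_check_indirect_spte() and INVALID_GFN are
hypothetical names, and it assumes sp->gfns[] is maintained as
described above and that the walk runs under rcu_read_lock().

/*
 * Hypothetical sketch, assuming:
 *   - sp->gfns[] entries start as INVALID_GFN and are reset to
 *     INVALID_GFN whenever the corresponding spte is dropped;
 *   - the caller has already set the fast-pf identification in the
 *     spte it read, so any concurrent change makes the cmpxchg fail;
 *   - sp->gfns is freed under RCU, so it cannot go away under us.
 */
static bool FNAME(fast_pf_check_indirect_spte)(struct kvm_mmu_page *sp,
					       u64 *sptep, u64 spte,
					       u64 new_spte, gfn_t gfn)
{
	/*
	 * gfn was read from the guest pte; if it does not match the
	 * gfn this spte was built for, the gpte has been modified
	 * (or the spte dropped) and the fast path must give up.
	 */
	if (sp->gfns[sptep - sp->spt] != gfn)
		return false;

	/*
	 * The identification was set in 'spte' before the check above,
	 * so if the spte has changed in the meantime (zap, remap, ...)
	 * the cmpxchg fails and we fall back to the slow path.
	 */
	return cmpxchg64(sptep, spte, new_spte) == spte;
}

The gfn check alone only proves the spte pointed to gfn's pfn at some
instant; it is the cmpxchg against the previously tagged spte that
turns that instant into a safe update.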