On Tue, Jul 22, 2014 at 05:55:20AM +0800, Xiao Guangrong wrote:
> 
> On Jul 10, 2014, at 3:12 AM, mtosatti@xxxxxxxxxx wrote:
> 
> > Reload remote vcpus MMU from GET_DIRTY_LOG codepath, before
> > deleting a pinned spte.
> > 
> > Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
> > 
> > ---
> >  arch/x86/kvm/mmu.c |   29 +++++++++++++++++++++++------
> >  1 file changed, 23 insertions(+), 6 deletions(-)
> > 
> > Index: kvm.pinned-sptes/arch/x86/kvm/mmu.c
> > ===================================================================
> > --- kvm.pinned-sptes.orig/arch/x86/kvm/mmu.c	2014-07-09 11:23:59.290744490 -0300
> > +++ kvm.pinned-sptes/arch/x86/kvm/mmu.c	2014-07-09 11:24:58.449632435 -0300
> > @@ -1208,7 +1208,8 @@
> >   *
> >   * Return true if tlb need be flushed.
> >   */
> > -static bool spte_write_protect(struct kvm *kvm, u64 *sptep, bool pt_protect)
> > +static bool spte_write_protect(struct kvm *kvm, u64 *sptep, bool pt_protect,
> > +			       bool skip_pinned)
> >  {
> >  	u64 spte = *sptep;
> >  
> > @@ -1218,6 +1219,22 @@
> >  
> >  	rmap_printk("rmap_write_protect: spte %p %llx\n", sptep, *sptep);
> >  
> > +	if (is_pinned_spte(spte)) {
> > +		/* keep pinned spte intact, mark page dirty again */
> > +		if (skip_pinned) {
> > +			struct kvm_mmu_page *sp;
> > +			gfn_t gfn;
> > +
> > +			sp = page_header(__pa(sptep));
> > +			gfn = kvm_mmu_page_get_gfn(sp, sptep - sp->spt);
> > +
> > +			mark_page_dirty(kvm, gfn);
> > +			return false;
> > +		} else
> > +			mmu_reload_pinned_vcpus(kvm);
> > +	}
> > +
> > +
> >  	if (pt_protect)
> >  		spte &= ~SPTE_MMU_WRITEABLE;
> >  	spte = spte & ~PT_WRITABLE_MASK;
> 
> There is also a window between marking the spte readonly and re-pinning…
> IIUC, I think pinned sptes can not be zapped and write-protected at any
> time.

It is safe because mmu_lock is held by kvm_mmu_slot_remove_write_access?