On Tue, Jun 07, 2011 at 09:00:30PM +0800, Xiao Guangrong wrote:
> If the page fault is caused by mmio, we can cache the mmio info; later, we do
> not need to walk the guest page table and can quickly know it is a mmio fault
> while we emulate the mmio instruction.
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxx>
> ---
>  arch/x86/include/asm/kvm_host.h |    5 +++
>  arch/x86/kvm/mmu.c              |   21 +++++----------
>  arch/x86/kvm/mmu.h              |   23 +++++++++++++++++
>  arch/x86/kvm/paging_tmpl.h      |   21 ++++++++++-----
>  arch/x86/kvm/x86.c              |   52 ++++++++++++++++++++++++++++++--------
>  arch/x86/kvm/x86.h              |   36 +++++++++++++++++++++++++++
>  6 files changed, 126 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index d167039..326af42 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -414,6 +414,11 @@ struct kvm_vcpu_arch {
>  	u64 mcg_ctl;
>  	u64 *mce_banks;
>
> +	/* Cache MMIO info */
> +	u64 mmio_gva;
> +	unsigned access;
> +	gfn_t mmio_gfn;
> +
>  	/* used for guest single stepping over the given code position */
>  	unsigned long singlestep_rip;
>

Why are you not implementing the original idea of caching the MMIO attribute
of an address in the spte? That solution reaches much further than a one-entry
cache, and it was proposed to cope with a large number of memslots. If the
access pattern alternates between different addresses, a one-entry cache is
doomed.
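
To make the concern concrete, a one-entry cache along these lines only helps
when consecutive MMIO faults hit the same gva. The sketch below is mine, not
the patch as posted (the x86.h hunk with the actual helpers is not quoted
above); the helper names and exact checks are illustrative assumptions built
on the kvm_host.h fields shown in the hunk:

	/*
	 * Illustrative sketch only: records the last MMIO access so the
	 * emulator can skip the guest page-table walk next time.
	 */
	static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
						gva_t gva, gfn_t gfn,
						unsigned access)
	{
		vcpu->arch.mmio_gva = gva & PAGE_MASK;
		vcpu->arch.access = access;
		vcpu->arch.mmio_gfn = gfn;
	}

	/*
	 * A single entry: any access to a different gva misses the cache
	 * and falls back to the full walk.
	 */
	static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, gva_t gva)
	{
		return vcpu->arch.mmio_gva &&
		       vcpu->arch.mmio_gva == (gva & PAGE_MASK);
	}

With this structure, a guest that alternates between two MMIO regions misses
the cache on every access, whereas marking the MMIO attribute in the spte
keeps the information per mapping.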