On Wed, Jul 03, 2013 at 04:50:36PM +0800, Xiao Guangrong wrote:
> On 07/03/2013 04:39 PM, Xiao Guangrong wrote:
> > On 07/03/2013 04:28 PM, Paolo Bonzini wrote:
> >> On 03/07/2013 10:18, Takuya Yoshikawa wrote:
> >>> Since kvm_arch_prepare_memory_region() is called right after installing
> >>> the slot marked invalid, wraparound checking should be there to avoid
> >>> zapping mmio sptes when the mmio generation is still MMIO_MAX_GEN - 1.
> >>> 
> >>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@xxxxxxxxxxxxx>
> >>> ---
> >>> This seems to be the simplest solution for fixing the off-by-one issue
> >>> we discussed before.
> >>> 
> >>>  arch/x86/kvm/mmu.c | 5 +----
> >>>  arch/x86/kvm/x86.c | 7 +++++++
> >>>  2 files changed, 8 insertions(+), 4 deletions(-)
> >>> 
> >>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> >>> index 0d094da..bf7af1e 100644
> >>> --- a/arch/x86/kvm/mmu.c
> >>> +++ b/arch/x86/kvm/mmu.c
> >>> @@ -4383,11 +4383,8 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
> >>>  	/*
> >>>  	 * The very rare case: if the generation-number is round,
> >>>  	 * zap all shadow pages.
> >>> -	 *
> >>> -	 * The max value is MMIO_MAX_GEN - 1 since it is not called
> >>> -	 * when mark memslot invalid.
> >>>  	 */
> >>> -	if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1))) {
> >>> +	if (unlikely(kvm_current_mmio_generation(kvm) >= MMIO_MAX_GEN)) {
> >>>  		printk_ratelimited(KERN_INFO "kvm: zapping shadow pages for mmio generation wraparound\n");
> >>>  		kvm_mmu_invalidate_zap_all_pages(kvm);
> >>>  	}
> >>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> >>> index 7d71c0f..9ddd4ff 100644
> >>> --- a/arch/x86/kvm/x86.c
> >>> +++ b/arch/x86/kvm/x86.c
> >>> @@ -7046,6 +7046,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> >>>  		memslot->userspace_addr = userspace_addr;
> >>>  	}
> >>> 
> >>> +	/*
> >>> +	 * In these cases, slots->generation has been increased for marking the
> >>> +	 * slot invalid, so we need wraparound checking here.
> >>> +	 */
> >>> +	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE))
> >>> +		kvm_mmu_invalidate_mmio_sptes(kvm);
> >>> +
> >>>  	return 0;
> >>>  }
> >>> 
> >> 
> >> Applied, thanks.
> > 
> > Please wait a while. I cannot understand it very clearly.
> > 
> > This conditional check can cause an overflowed generation value to be
> > cached into an mmio spte. The simple case is that kvm adds new slots
> > many times, so the mmio generation easily goes above MMIO_MAX_GEN.
> > 
> 
> Actually, the double zapping can be avoided by moving
> kvm_mmu_invalidate_mmio_sptes() to the end of install_new_memslots().
> 
Exactly. Why should we hide it in obscure functions?

--
			Gleb.
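
[Editor's note: below is a minimal userspace sketch of the alternative Xiao suggests, i.e. doing the wraparound check once, unconditionally, right after the generation is bumped in install_new_memslots(). All names, constants, and signatures here are simplified assumptions for illustration; they are not the real arch/x86/kvm/mmu.c or virt/kvm/kvm_main.c interfaces.]

/*
 * Hypothetical simulation of the mmio-generation wraparound discussion.
 * MMIO_GEN_BITS is made tiny so wraparound happens quickly; the kernel
 * uses a much wider field and real spte bookkeeping.
 */
#include <stdio.h>

#define MMIO_GEN_BITS 4
#define MMIO_GEN_MASK ((1u << MMIO_GEN_BITS) - 1)
#define MMIO_MAX_GEN  MMIO_GEN_MASK

enum change { MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

static unsigned int slots_generation;       /* stands in for kvm->memslots->generation */
static unsigned int cached_spte_generation; /* generation stamped into a cached mmio spte */
static unsigned int zap_count;              /* how often "zap all shadow pages" fired */

static unsigned int current_mmio_generation(void)
{
	return slots_generation & MMIO_GEN_MASK;
}

/* The wraparound check itself (stand-in for kvm_mmu_invalidate_mmio_sptes()). */
static void invalidate_mmio_sptes_if_wrapping(void)
{
	if (current_mmio_generation() >= MMIO_MAX_GEN) {
		zap_count++;                 /* stands in for zapping all shadow pages */
		cached_spte_generation = 0;  /* stale sptes are gone after the zap */
	}
}

/*
 * Xiao's suggestion, as sketched here: bump the generation and run the
 * check in one place, so every kind of slot update (create, delete, move,
 * flags change) goes through the same wraparound handling and no path can
 * leave an already-wrapped generation cached in an mmio spte.
 */
static void install_new_memslots(enum change change)
{
	(void)change;                        /* the check no longer depends on the change type */
	slots_generation++;                  /* every slot update bumps the generation */
	invalidate_mmio_sptes_if_wrapping();
}

static void cache_mmio_spte(void)
{
	cached_spte_generation = current_mmio_generation();
}

int main(void)
{
	/*
	 * Many CREATEs in a row: the thread's concern is that a check done
	 * only for DELETE/MOVE never fires on this path.  With the check in
	 * install_new_memslots() it fires once per MMIO_MAX_GEN updates.
	 */
	for (int i = 0; i < 100; i++) {
		install_new_memslots(MR_CREATE);
		cache_mmio_spte();
	}
	printf("generation=%u cached=%u zaps=%u\n",
	       current_mmio_generation(), cached_spte_generation, zap_count);
	return 0;
}

[The point of the sketch is only the placement of the check: one call site after the generation bump, rather than scattering change-type conditions across callers.]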