2015-10-12 14:09+0200, Paolo Bonzini:
> Otherwise, two copies (one of them never used and thus bogus) are
> allocated for the regular and SMM address spaces.  This breaks
> SMM with EPT but without unrestricted guest support, because the
> SMM copy of the identity page map is all zeros.

(Have you found out why EPT+unrestricted didn't use the alternative SMM
 mapping as well?)

> By moving the allocation to the caller we also remove the last
> vestiges of kernel-allocated memory regions (not accessible anymore
> in userspace since commit b74a07beed0e, "KVM: Remove kernel-allocated
> memory regions", 2010-06-21); that is a nice bonus.
>
> Reported-by: Alexandre DERUMIER <aderumier@xxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Fixes: 9da0e4d5ac969909f6b435ce28ea28135a9cbd69
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> ---

vm_mmap() leaks if __kvm_set_memory_region() fails.  That's nothing new,
and the subsequent process termination should take care of it, so:

Reviewed-by: Radim Krčmář <rkrcmar@xxxxxxxxxx>

> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> @@ -7717,23 +7717,53 @@ void kvm_arch_sync_events(struct kvm *kvm)
>  int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
>  {
>  	int i, r;
> +	u64 hva;
> +	struct kvm_memslots *slots = kvm_memslots(kvm);
> +	struct kvm_memory_slot *slot, old;
| [...]
> +	slot = &slots->memslots[slots->id_to_index[id]];

This seems better written as

  slot = id_to_memslot(slots, id);

(Made me remember that I want to refactor the memslot API ...)

| [...]
> +	} else {
> +		if (!slot->npages)
> +			return 0;
> +
> +		hva = 0;
> +	}
> +
> +	old = *slot;

(The assignment could be in the 'else' == !size branch; GCC would have fun.)

| [...]
> +	if (!size) {
> +		r = vm_munmap(old.userspace_addr, old.npages * PAGE_SIZE);
> +		WARN_ON(r < 0);
> +	}