On 03/21/2013 04:14 AM, Marcelo Tosatti wrote:
>
> kvm_mmu_calculate_mmu_pages numbers:
>
>     maximum number of shadow pages = 2% of mapped guest pages
>
> Does not make sense for TDP guests where mapping all of guest
> memory with 4k pages cannot exceed "mapped guest pages / 512"
> (not counting root pages).
>
> Allow that maximum for TDP, forcing the guest to recycle otherwise.
>
> Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 956ca35..a9694a8d7 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4293,7 +4293,7 @@ nomem:
>  unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
>  {
>  	unsigned int nr_mmu_pages;
> -	unsigned int nr_pages = 0;
> +	unsigned int i, nr_pages = 0;
>  	struct kvm_memslots *slots;
>  	struct kvm_memory_slot *memslot;
>
> @@ -4302,7 +4302,19 @@ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
>  	kvm_for_each_memslot(memslot, slots)
>  		nr_pages += memslot->npages;
>
> -	nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
> +	if (tdp_enabled) {
> +		/* one root page */
> +		nr_mmu_pages = 1;
> +		/* nr_pages / (512^i) per level, due to
> +		 * guest RAM map being linear */
> +		for (i = 1; i < 4; i++) {
> +			int nr_pages_round = nr_pages + (1 << (9*i));
> +			nr_mmu_pages += nr_pages_round >> (9*i);
> +		}

Marcelo,

Can this work when a nested guest is used? Did you see any problem in
practice, i.e. a direct guest using more memory than your calculation?

And MMIO can also build page tables, which does not seem to be
accounted for in this patch.
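For reference, below is a quick userspace sketch of the arithmetic the
patch introduces, compared against the old 2% rule. It assumes
KVM_PERMILLE_MMU_PAGES is still 20 and uses a hypothetical 4GB guest;
it is not kernel code:

#include <stdio.h>

/* TDP bound from the patch: one root page plus, per level i, roughly
 * nr_pages / 512^i table pages, rounded the same way as the patch
 * does (which over-counts by at most one page per level). */
static unsigned int tdp_mmu_pages(unsigned int nr_pages)
{
	unsigned int i, nr_mmu_pages = 1;	/* one root page */

	for (i = 1; i < 4; i++) {
		unsigned int nr_pages_round = nr_pages + (1U << (9 * i));
		nr_mmu_pages += nr_pages_round >> (9 * i);
	}
	return nr_mmu_pages;
}

int main(void)
{
	unsigned int nr_pages = 1048576;	/* 4GB of 4k pages */

	printf("TDP bound: %u pages\n", tdp_mmu_pages(nr_pages));
	printf("2%% rule:   %u pages\n", nr_pages * 20 / 1000);
	return 0;
}

For a 4GB guest this prints 2056 pages for the TDP bound against 20971
for the 2% rule, which illustrates the gap the patch is closing.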