On 04/09/15 12:04, Alexander Graf wrote:
>
> On 04.09.15 11:59, Christian Borntraeger wrote:
>> On 04.09.2015 at 11:35, Thomas Huth wrote:
>>>
>>> Hi all,
>>>
>>> now that we get memory hotplugging for the spapr machine on qemu-ppc,
>>> too, it seems like we can easily hit the limit of KVM-internal memory
>>> slots now ("#define KVM_USER_MEM_SLOTS 32" in
>>> arch/powerpc/include/asm/kvm_host.h). For example, start
>>> qemu-system-ppc64 with a couple of "-device secondary-vga" and "-m
>>> 4G,slots=32,maxmem=40G" and then try to hot-plug all 32 DIMMs ... and
>>> you'll see that it aborts way earlier already.
>>>
>>> The x86 code has already increased KVM_USER_MEM_SLOTS to 509
>>> (+3 internal slots = 512) ... maybe we should now increase the
>>> number of slots on powerpc, too? Since we don't use internal slots on
>>> POWER, would 512 be a good value? Or would less be sufficient, too?
>>
>> While you are at it, the s390 value should also be increased, I guess.
>
> That constant defines the array size for the memslot array in struct kvm,
> which in turn gets allocated by kzalloc, so it's pinned kernel memory
> that is physically contiguous. Doing big allocations can turn into
> problems during runtime.

FWIW, I've just checked sizeof(struct kvm) with the current ppc64 kernel
built from the master branch, and it is 34144 bytes. So on a system that
uses PAGE_SIZE = 64kB, there should be plenty of space left before we get
into trouble. And even in the worst case, on a system that still uses
PAGE_SIZE = 4kB, the last page of the 34144 bytes is only filled with
1376 bytes, leaving 2720 bytes free right now.

sizeof(struct kvm_memory_slot) is currently 48 bytes on powerpc, and you
need two additional bytes per entry for the id_to_index array in
struct kvm_memslots, i.e. we need 50 additional bytes per entry on ppc.
That means we could increase KVM_USER_MEM_SLOTS by 2720 / 50 = 54 entries
without the allocation spilling into another page.
I think we should leave some additional headroom in that last 4k page of
the struct kvm region, so what about increasing KVM_USER_MEM_SLOTS to
32 + 48 = 80 now (instead of 32 + 54 = 86)? That would ease the memslot
situation at least a little bit until we have figured out a really final
solution such as growable memslots.

 Thomas