In order to support even huge numbers of vcpus per guest with a single kernel, some arrays should be sized dynamically.

The easiest way to do that is to add boot parameters for the maximum number of vcpus and to calculate the maximum vcpu-id from that, using either the host topology or a topology hint supplied via another boot parameter.

This patch series does that for x86. The same scheme can easily be adapted to other architectures, but I don't want to do that in the first iteration.

In the long term I'd suggest making the two parameters per-guest settings, allowing some memory to be saved for smaller guests. OTOH this would require new ioctl()s and matching qemu modifications, so I'm leaving those out for now.

I've tested that the series doesn't break normal guest operation and that the new parameters are effective on x86. For Arm64 I did a compile test only.

Changes in V2:
- removed old patch 1, as it has already been applied
- patch 1 (old patch 2) included for reference only, as it is already in the kvm tree
- switched patch 2 (old patch 3) to calculate the vcpu-id
- added patch 4

Juergen Gross (6):
  x86/kvm: remove non-x86 stuff from arch/x86/kvm/ioapic.h
  x86/kvm: add boot parameter for adding vcpu-id bits
  x86/kvm: introduce per cpu vcpu masks
  kvm: use kvfree() in kvm_arch_free_vm()
  kvm: allocate vcpu pointer array separately
  x86/kvm: add boot parameter for setting max number of vcpus per guest

 .../admin-guide/kernel-parameters.txt | 25 ++++++
 arch/arm64/include/asm/kvm_host.h     |  1 -
 arch/arm64/kvm/arm.c                  | 23 ++++--
 arch/x86/include/asm/kvm_host.h       | 26 +++++--
 arch/x86/kvm/hyperv.c                 | 25 ++++--
 arch/x86/kvm/ioapic.c                 | 12 ++-
 arch/x86/kvm/ioapic.h                 |  8 +-
 arch/x86/kvm/irq_comm.c               |  9 ++-
 arch/x86/kvm/x86.c                    | 78 ++++++++++++++++++-
 include/linux/kvm_host.h              | 26 ++++++-
 10 files changed, 198 insertions(+), 35 deletions(-)
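
As a rough illustration of the vcpu-id calculation described above (a minimal sketch; the parameter and helper names below are made up and not necessarily the ones introduced by the series), the id space could be derived by rounding the configured maximum vcpu count up to a power of two and reserving extra topology bits on top:

  #include <linux/bitops.h>

  /* Hypothetical boot parameters; the real names may differ. */
  static unsigned int kvm_max_vcpus = 1024;  /* max vcpus per guest */
  static unsigned int vcpu_id_add_bits;      /* topology hint: extra id bits */

  /*
   * APIC IDs need not be dense: the guest topology (threads, cores,
   * dies) can leave holes in the id space, so reserve additional bits
   * on top of those needed to merely enumerate kvm_max_vcpus vcpus.
   */
  static unsigned int kvm_vcpu_id_space(void)
  {
          return 1U << (fls(kvm_max_vcpus - 1) + vcpu_id_add_bits);
  }

With kvm_max_vcpus = 1024 and vcpu_id_add_bits = 2 this yields an id space of 4096, so arrays indexed by vcpu-id would be allocated with that many entries instead of being sized for a compile-time maximum.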