In order to be able to support even huge numbers of vcpus per guest with a
single kernel, some arrays should be sized dynamically.

The easiest way to do that is to add boot parameters for the maximum number
of vcpus and to calculate the maximum vcpu-id from that, using either the
host topology or a topology hint supplied via another boot parameter.

This patch series does that for x86. The same scheme can easily be adapted
to other architectures, but I don't want to do that in the first iteration.

I've verified on x86 that the series doesn't break normal guest operation
and that the new parameters are effective.

This series is based on Marc Zyngier's xarray series:

  https://lore.kernel.org/kvm/20211105192101.3862492-1-maz@xxxxxxxxxx/

Changes in V2:
- removed old patch 1, as already applied
- patch 1 (old patch 2) only for reference, as the patch is already in the
  kvm tree
- switch patch 2 (old patch 3) to calculate vcpu-id
- added patch 4

Changes in V3:
- removed V2 patches 1 and 4, as already applied
- removed V2 patch 5, as replaced by Marc Zyngier's xarray series
- removed hyperv handling from patch 2
- new patch 3 handling hyperv specifics
- comments addressed

Juergen Gross (4):
  x86/kvm: add boot parameter for adding vcpu-id bits
  x86/kvm: introduce a per cpu vcpu mask
  x86/kvm: add max number of vcpus for hyperv emulation
  x86/kvm: add boot parameter for setting max number of vcpus per guest

 .../admin-guide/kernel-parameters.txt | 25 +++++++++
 arch/x86/include/asm/kvm_host.h       | 29 +++++-----
 arch/x86/kvm/hyperv.c                 | 15 +++---
 arch/x86/kvm/ioapic.c                 | 20 ++++++-
 arch/x86/kvm/ioapic.h                 |  4 +-
 arch/x86/kvm/irq_comm.c               |  9 +++-
 arch/x86/kvm/x86.c                    | 54 ++++++++++++++++++-
 7 files changed, 128 insertions(+), 28 deletions(-)

-- 
2.26.2
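
P.S.: As a rough illustration only, not code taken from the series, the
userspace C sketch below shows how a maximum vcpu-id could be derived from
a configured maximum vcpu count plus a number of extra topology bits. The
names max_vcpus, add_bits and calc_max_vcpu_id are made up for this
example and do not claim to match the boot parameters or helpers added by
the patches.

/*
 * Illustrative sketch: derive a maximum vcpu-id from a maximum vcpu
 * count and a number of extra bits reserved for sparse topology ids
 * (e.g. unused SMT sibling positions).
 */
#include <stdio.h>

static unsigned int calc_max_vcpu_id(unsigned int max_vcpus,
				     unsigned int add_bits)
{
	unsigned int bits = 0;

	/* Number of bits needed to encode max_vcpus distinct ids. */
	while ((1u << bits) < max_vcpus)
		bits++;

	/* Add room for topology-induced gaps in the id space. */
	return 1u << (bits + add_bits);
}

int main(void)
{
	/* 1024 vcpus with 2 extra topology bits -> ids up to 4096. */
	printf("max vcpu-id: %u\n", calc_max_vcpu_id(1024, 2));
	return 0;
}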