TL;DR: any particular reason why KVM_USER_MEM_SLOTS is so low?

Longer version: the current KVM_USER_MEM_SLOTS limit (509) can be a
limiting factor for some configurations. In particular, when QEMU tries
to start a Windows guest with Hyper-V SynIC enabled and e.g. 256 vCPUs,
the limit is hit: SynIC requires two pages per vCPU, and the guest is
free to pick any GFN for each of them. This fragments memslots, as QEMU
wants to have a separate memslot for each of these pages (which are
supposed to act as 'overlay' pages), so 256 vCPUs alone need 512
memslots, already above the limit.

Memory slots are allocated dynamically in KVM when added, so the only
real limitation is the 'id_to_index' array, whose entries are 'short'.
We don't have any KVM_MEM_SLOTS_NUM/KVM_USER_MEM_SLOTS-sized statically
defined arrays.

We could've just raised the limit to e.g. '1021', keeping
KVM_MEM_SLOTS_NUM at a power of two (we have 3 private memslots on x86),
and this should be enough for now as KVM_MAX_VCPUS is '288', but AFAIK
there are plans to raise that limit as well.

Vitaly Kuznetsov (4):
  KVM: x86: Drop redundant KVM_MEM_SLOTS_NUM definition
  KVM: mips: Drop KVM_PRIVATE_MEM_SLOTS definition
  KVM: Define KVM_USER_MEM_SLOTS in arch-neutral include/linux/kvm_host.h
  KVM: x86: Stop limiting KVM_USER_MEM_SLOTS

 arch/mips/include/asm/kvm_host.h | 2 --
 arch/x86/include/asm/kvm_host.h  | 3 +--
 include/linux/kvm_host.h         | 4 ++++
 3 files changed, 5 insertions(+), 4 deletions(-)

-- 
2.29.2
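
P.S. For reference, a minimal sketch of the definitions in play, as they
look before this series (values taken from the cover letter; 'struct
kvm_memslots' is abridged, so the exact field layout here is
illustrative rather than authoritative):

	/* arch/x86/include/asm/kvm_host.h (pre-series) */
	#define KVM_USER_MEM_SLOTS	509
	/* memory slots that are not exposed to userspace */
	#define KVM_PRIVATE_MEM_SLOTS	3
	#define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)

	/* include/linux/kvm_host.h (abridged) */
	struct kvm_memslots {
		u64 generation;
		/*
		 * Mapping from slot id to index in memslots[]. The 'short'
		 * entries are the only hard cap on the number of slots:
		 * nothing else here is statically sized by the limit.
		 */
		short id_to_index[KVM_MEM_SLOTS_NUM];
		int used_slots;
		struct kvm_memory_slot memslots[];
	};

Since id_to_index[] lives in a dynamically allocated structure, raising
KVM_USER_MEM_SLOTS only grows that per-VM allocation; 'short' indices
are what ultimately bound the slot count.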