We would like to be able to create large VMs (currently up to 224 vCPUs)
with up to 128 virtio-net cards, where each card needs a TX+RX queue per
vCPU for optimal performance (as well as config & control interrupts per
card). Adding in extra virtio-blk controllers with a queue per vCPU (up
to 192 disks) yields a total of about 100k IRQ routes, rounded up to
128k for extra headroom and flexibility.

The current limit of 4096 was set in 2018 and is too low for modern
demands. It also seems to exist for no good reason, since routes are
allocated lazily by the kernel anyway (depending on the largest GSI
requested by the VM).

Signed-off-by: Daniil Tatianin <d-tatianin@xxxxxxxxxxxxxx>
---
 include/linux/kvm_host.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 48f31dcd318a..10a141add2a8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2093,7 +2093,7 @@ static inline bool mmu_invalidate_retry_gfn_unsafe(struct kvm *kvm,
 
 #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
 
-#define KVM_MAX_IRQ_ROUTES 4096 /* might need extension/rework in the future */
+#define KVM_MAX_IRQ_ROUTES 131072 /* might need extension/rework in the future */
 
 bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
 int kvm_set_irq_routing(struct kvm *kvm,
-- 
2.34.1