I observed that there is one msix vector for config and one shared vector
for all queues with the below qemu cmdline, when the num-queues for
virtio-blk is more than the number of possible cpus:

qemu: "-smp 4" with "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=6"

# cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
... ...
 24:          0          0          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          0          0          0         59   PCI-MSI 65537-edge      virtio0-virtqueues
... ...

However, when num-queues is the same as the number of possible cpus:

qemu: "-smp 4" with "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=4"

# cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
... ...
 24:          0          0          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          2          0          0          0   PCI-MSI 65537-edge      virtio0-req.0
 26:          0         35          0          0   PCI-MSI 65538-edge      virtio0-req.1
 27:          0          0         32          0   PCI-MSI 65539-edge      virtio0-req.2
 28:          0          0          0          0   PCI-MSI 65540-edge      virtio0-req.3
... ...

In the latter case, there is one msix vector per queue.

This is because the maximum number of queues is not limited by the number
of possible cpus. By default, nvme (regardless of write_queues and
poll_queues) and xen-blkfront limit the number of queues to
num_possible_cpus().

Is this by design, or can we fix it with something like the change below?

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4bc083b..df95ce3 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
 	if (err)
 		num_vqs = 1;
 
+	num_vqs = min_t(unsigned int, num_possible_cpus(), num_vqs);
+
 	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
 	if (!vblk->vqs)
 		return -ENOMEM;
--

PS: The same issue is applicable to virtio-scsi as well.

Thank you very much!

Dongli Zhang
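
For virtio-scsi, the same idea would presumably apply in virtscsi_probe()
in drivers/scsi/virtio_scsi.c, where num_queues is read from the device
config. A rough, untested sketch of the equivalent clamp (min_t() and
num_possible_cpus() are standard kernel helpers; the exact placement in
virtscsi_probe() is an assumption, not a tested patch):

	/*
	 * Sketch only: clamp the device-reported queue count to the
	 * number of possible cpus, mirroring the virtio-blk hunk above.
	 * Assumed to sit right after num_queues is read from the device
	 * config in virtscsi_probe().
	 */
	num_queues = min_t(unsigned int, num_queues, num_possible_cpus());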