Re: Re: [PATCH] make blk_mq_map_queues more friendly for cpu topology


 





Actually, I just bought a VM from a public cloud provider and ran into this problem.
After reading the code and comparing the PCI device info, I reproduced this scenario.

Since common users cannot change the number of MSI vectors, I suggest making blk_mq_map_queues
more friendly. blk_mq_map_queues may be the last choice.
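
To make concrete what "more friendly for cpu topology" means here, below is a small
stand-alone user-space sketch, not the kernel patch itself. The CPU count, queue count
and sibling layout are assumptions picked only for illustration.

#include <stdio.h>

/*
 * Illustration only, not kernel code. Assume 8 CPUs forming 4 physical
 * cores with hyperthread sibling pairs (0,1) (2,3) (4,5) (6,7), and 4
 * hardware queues. This layout is an assumption made for the example.
 */
#define NR_CPUS   8
#define NR_QUEUES 4

int main(void)
{
        int cpu;

        /* Naive mapping: queue = cpu % nr_queues, blind to siblings. */
        printf("naive:    ");
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu%d->q%d ", cpu, cpu % NR_QUEUES);
        printf("\n");

        /*
         * Topology-aware mapping: keep both siblings of a physical core
         * on the same queue, so a queue's completions stay on one core.
         */
        printf("per-core: ");
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu%d->q%d ", cpu, (cpu / 2) % NR_QUEUES);
        printf("\n");

        return 0;
}

With the assumed layout the naive pass splits each sibling pair across two queues,
while the per-core pass keeps each pair together, which is roughly the kind of
mapping the patch title is asking for.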





At 2019-03-27 16:16:19, "Christoph Hellwig" <hch@xxxxxx> wrote:
>On Tue, Mar 26, 2019 at 03:55:10PM +0800, luferry wrote:
>> 
>> 
>> 
>> At 2019-03-26 15:39:54, "Christoph Hellwig" <hch@xxxxxx> wrote:
>> >Why isn't this using the automatic PCI-level affinity assignment to
>> >start with?
>> 
>> When virtio-blk is enabled with multiple queues but only 2 MSI-X vectors,
>> vp_dev->per_vq_vectors will be false and vp_get_vq_affinity will return NULL directly,
>> so blk_mq_virtio_map_queues will fall back to blk_mq_map_queues.
>
>What is the point of the multiqueue mode if you don't have enough
>(virtual) MSI-X vectors?
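
For readers following along, here is a simplified sketch of the fallback path described
in the quote above. It paraphrases the behaviour from memory rather than quoting
blk_mq_virtio_map_queues verbatim, so names and return types may differ between
kernel versions:

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/virtio_config.h>

/*
 * Simplified sketch of the mapping path discussed above (paraphrased, not a
 * verbatim copy of blk_mq_virtio_map_queues). With only 2 MSI-X vectors the
 * transport does not use per-VQ vectors, get_vq_affinity yields no mask, and
 * we end up in blk_mq_map_queues().
 */
static int sketch_virtio_map_queues(struct blk_mq_queue_map *qmap,
                                    struct virtio_device *vdev, int first_vec)
{
        const struct cpumask *mask;
        unsigned int queue, cpu;

        if (!vdev->config->get_vq_affinity)
                goto fallback;

        for (queue = 0; queue < qmap->nr_queues; queue++) {
                /* NULL when the transport did not set up per-VQ vectors. */
                mask = vdev->config->get_vq_affinity(vdev, first_vec + queue);
                if (!mask)
                        goto fallback;

                for_each_cpu(cpu, mask)
                        qmap->mq_map[cpu] = qmap->queue_offset + queue;
        }
        return 0;

fallback:
        return blk_mq_map_queues(qmap);
}

So whenever the host hands the device too few vectors, the per-VQ affinity path is
skipped entirely and the quality of blk_mq_map_queues' own mapping is all the guest
gets, which is why making that mapping topology-aware still matters.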



