Re: [PATCH V2 5/9] scsi: hisi: take blk_mq_max_nr_hw_queues() into account for calculating io vectors

I am just saying that we have a fixed number of HW queues (16), each of
which may be used for interrupt or polling mode. And since we always
allocate the max number of MSI vectors, the number of interrupt queues
available will be 16 - nr_poll_queues.

No.

queue_count is fixed at 16, but pci_alloc_irq_vectors_affinity() may still
return fewer vectors: interrupt vectors are a system-wide resource, while
queue count is a per-device resource.

So when fewer vectors are allocated, you should be able to use more poll
queues, but unfortunately the current code can't support that.

Even worse, hisi_hba->cq_nvecs can become negative if fewer vectors are returned.

OK, I see what you mean here. I thought that we were only considering the case where the number of vectors allocated equals the max requested.

Yes, I see how allocating fewer than the max can cause an issue. I am not sure whether increasing iopoll_q_cnt beyond the driver module param value is proper then, but obviously we don't want cq_nvecs to become negative.
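
To make the failure mode concrete, here is a minimal sketch of the allocation
path under discussion, assuming cq_nvecs is derived from whatever
pci_alloc_irq_vectors_affinity() actually returns minus the 16 reserved base
vectors and iopoll_q_cnt (the function name, the literal 16 and the example
numbers in the comments are illustrative, not a quote of the driver):

/*
 * Illustrative sketch only, not the driver's exact code: it shows why
 * hisi_hba->cq_nvecs can go negative when fewer than max_msi vectors
 * are granted.
 */
static int interrupt_preinit_sketch(struct hisi_hba *hisi_hba)
{
	struct pci_dev *pdev = hisi_hba->pci_dev;
	/* the first 16 vectors are reserved, non-queue interrupts */
	struct irq_affinity desc = { .pre_vectors = 16 };
	int min_msi = 17, max_msi = 32;
	int vectors;

	vectors = pci_alloc_irq_vectors_affinity(pdev, min_msi, max_msi,
						 PCI_IRQ_MSI | PCI_IRQ_AFFINITY,
						 &desc);
	if (vectors < 0)
		return vectors;

	/*
	 * With vectors == 32 this gives cq_nvecs == 16 - iopoll_q_cnt, as
	 * assumed above.  But the allocation may legally return anything
	 * down to min_msi == 17; e.g. vectors == 20 with iopoll_q_cnt == 8
	 * would yield cq_nvecs == -4.
	 */
	hisi_hba->cq_nvecs = vectors - 16 - hisi_hba->iopoll_q_cnt;

	return 0;
}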





So it isn't related to the driver's MSI vector allocation bug, is it?
My deduction is that this is how this currently "works" for non-zero iopoll
queues:
- allocate max MSI of 32, which gives 32 vectors including 16 cq vectors.
That then gives:
     - cq_nvecs = 16 - iopoll_q_cnt
     - shost->nr_hw_queues = 16
     - 16x MSI cq vectors were spread over all CPUs
It should be that the cq_nvecs vectors are spread over all CPUs, and the
iopoll_q_cnt queues are spread over all CPUs too.

I agree that it should be, but I don't think that it is for HCTX_TYPE_DEFAULT,
as below.


For each queue type, nr_queues of this type are spread over all CPUs.

- in hisi_sas_map_queues()
     - HCTX_TYPE_DEFAULT: qmap->nr_queues = 16 - iopoll_q_cnt, and for
blk_mq_pci_map_queues() we set up affinity for 16 - iopoll_q_cnt hw queues.
This looks broken: we originally spread 16x vectors over all CPUs, but now
we only set up mappings for (16 - iopoll_q_cnt) vectors, whose affinity
would cover only a subset of CPUs. And then qmap->mq_map[] for the other
CPUs is not set at all.
That isn't true; please see my comment above.

I am just basing that on what I mentioned above, so please let me know where
I am inaccurate.

You said queue mapping for HCTX_TYPE_DEFAULT is broken, but it isn't.

You said 'we originally spread 16x vectors over all CPUs', which isn't
true.

Are you talking about the case of allocating fewer than the max requested vectors, as above?

If we have min_msi = 17, max_msi = 32, affinity_desc = {16, 0}, and we allocate 32 vectors from pci_alloc_irq_vectors_affinity(), then I would have thought that the affinity for the 16x cq vectors is spread over all CPUs. Is that wrong?

Again, the '16 - iopoll_q_cnt' vectors are spread over all CPUs, and the
same applies to the iopoll_q_cnt queues.

Since both blk_mq_map_queues() and blk_mq_pci_map_queues() spread
map->nr_queues over all CPUs, there is no spreading over only a subset of CPUs.
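
For reference, the mapping being discussed has roughly the shape of the
sketch below (hisi_hba, cq_nvecs and iopoll_q_cnt are the fields named in
the thread; the function name and the literal offset of 16 are illustrative,
not a verbatim copy of hisi_sas_map_queues()):

static void map_queues_sketch(struct Scsi_Host *shost)
{
	struct hisi_hba *hisi_hba = shost_priv(shost);
	struct blk_mq_queue_map *qmap;
	int i, qoff;

	for (i = 0, qoff = 0; i < shost->nr_maps; i++) {
		qmap = &shost->tag_set.map[i];

		if (i == HCTX_TYPE_DEFAULT)
			qmap->nr_queues = hisi_hba->cq_nvecs;
		else if (i == HCTX_TYPE_POLL)
			qmap->nr_queues = hisi_hba->iopoll_q_cnt;
		else
			qmap->nr_queues = 0;

		if (!qmap->nr_queues)
			continue;

		qmap->queue_offset = qoff;

		if (i == HCTX_TYPE_POLL) {
			/*
			 * Poll queues have no vectors: spread nr_queues
			 * evenly over all possible CPUs.
			 */
			blk_mq_map_queues(qmap);
		} else {
			/*
			 * For each hw queue q in [0, nr_queues), map every CPU
			 * in the affinity mask of MSI vector (16 + q) to q,
			 * i.e. skip the 16 reserved non-queue vectors.
			 */
			blk_mq_pci_map_queues(qmap, hisi_hba->pci_dev, 16);
		}

		qoff += qmap->nr_queues;
	}
}

blk_mq_map_queues() always covers every possible CPU, while
blk_mq_pci_map_queues() covers exactly the CPUs contained in the affinity
masks of the vectors it consults, which is the crux of the disagreement above.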


Thanks,
John



