On 27/07/2023 11:56, Ming Lei wrote:
As I mentioned, I think that the driver is forced to allocate all 32 MSI
vectors, even though it really needs at most 32 - iopoll_q_cnt, i.e. we don't
need an MSI vector for a HW queue assigned to polling. Then, since it gets
16 MSI vectors for completion queues, it subtracts iopoll_q_cnt to give the
desired count in cq_nvecs.
It looks like the current driver ties nr_irq_queues to nr_poll_queues, which
is logically wrong.
Here:
1) hisi_hba->queue_count means the max supported queue count (irq queues
+ poll queues)
JFYI, from
https://github.com/tianocore/edk2-platforms/blob/master/Silicon/Hisilicon/Hi1620/Hi1620AcpiTables/Dsdt/Hi1620Pci.asl#L587C25-L587C41,
hisi_hba->queue_count is fixed at 16.
2) max vectors (32) means the max supported irq queues,
I'll just mention some details of this HW, in case they were missed.
For this HW, max vectors is 32, but not all are for completion queue
interrupts.
interrupts 0-15 are for housekeeping, like phy up notification
interrupts 16-31 are for completion queue interrupts
That is a fixed assignment. irq #16 is for HW queue #0 interrupt, irq
#17 is for HW queue #1 interrupt, and so on.
but the actual
nr_irq_queues can be less than the allocated vectors because of the irq
allocation bug
3) so the max supported poll queues should be (hisi_hba->queue_count -
nr_irq_queues).
What I meant is that nr_poll_queues shouldn't be related to the allocated
MSI vectors.
Sure, I agree with that. nr_poll_queues is set in that driver as a
module param.
I am just saying that we have a fixed number of HW queues (16), each of
which may be used for interrupt or polling mode. And since we always
allocate the max number of MSI vectors, the number of interrupt queues
available will be 16 - nr_poll_queues.
So it isn't related to the driver's MSI vector allocation bug, is it?
My deduction is that this is how this currently "works" for non-zero iopoll
queue counts:
- allocate the max of 32 MSI vectors, including 16 cq vectors.
That then gives:
- cq_nvecs = 16 - iopoll_q_cnt
- shost->nr_hw_queues = 16
- 16x MSI cq vectors were spread over all CPUs
It should be that the cq_nvecs vectors are spread over all CPUs, and the
iopoll_q_cnt poll queues are spread over all CPUs too.
I agree, it should be, but I don't think that it is for
HCTX_TYPE_DEFAULT, as below.
For each queue type, nr_queues of this type are spread over all CPUs.
- in hisi_sas_map_queues()
- HCTX_TYPE_DEFAULT qmap->nr_queues = 16 - iopoll_q_cnt, and for
blk_mq_pci_map_queues() we set up affinity for 16 - iopoll_q_cnt hw queues.
This looks broken, as we originally spread 16 vectors over all CPUs, but
now only set up mappings for (16 - iopoll_q_cnt) vectors, whose affinity
would cover only a subset of CPUs. And then qmap->mq_map[] for the other
CPUs is not set at all.
That isn't true, please see my above comment.
I am just basing that on what I mentioned above, so please let me know
where I am inaccurate.
Thanks,
John