No RDMA device exposes its IRQ vector affinity today, so the only
mapping we have left is the default blk_mq_map_queues, which we
fall back to anyway. Also fix up the only consumer of this helper
(nvme-rdma).
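For reference, the helper being removed is essentially this loop in
block/blk-mq-rdma.c (quoting roughly from memory, details may differ):

	void blk_mq_rdma_map_queues(struct blk_mq_queue_map *map,
			struct ib_device *dev, int first_vec)
	{
		const struct cpumask *mask;
		unsigned int queue, cpu;

		/* try to map each hw queue to the CPUs of its irq vector */
		for (queue = 0; queue < map->nr_queues; queue++) {
			mask = ib_get_vector_affinity(dev, first_vec + queue);
			if (!mask)
				goto fallback;

			for_each_cpu(cpu, mask)
				map->mq_map[cpu] = map->queue_offset + queue;
		}
		return;

	fallback:
		blk_mq_map_queues(map);
	}

Since no driver implements .get_vector_affinity, ib_get_vector_affinity()
always returns NULL, we hit the fallback on the first queue, and nvme-rdma
can simply call blk_mq_map_queues() directly.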
This was the only caller of ib_get_vector_affinity(), so please delete
the get_vector_affinity op and ib_get_vector_affinity() from verbs as
well.
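For anyone following along, that helper is a thin inline in
include/rdma/ib_verbs.h, roughly:

	static inline const struct cpumask *
	ib_get_vector_affinity(struct ib_device *device, int comp_vector)
	{
		if (comp_vector < 0 ||
		    comp_vector >= device->num_comp_vectors ||
		    !device->ops.get_vector_affinity)
			return NULL;

		return device->ops.get_vector_affinity(device, comp_vector);
	}

so removing it also means dropping the .get_vector_affinity member from
struct ib_device_ops.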
Yep, no problem.
Given that nvme-rdma was the only consumer, would you prefer this go
through the nvme tree?
Sure, that is probably fine.
I tried to do it two+ years ago:
https://lore.kernel.org/all/20200929091358.421086-1-leon@xxxxxxxxxx
Christoph's points make sense, but I think we should still purge this
code.
If we want to do proper managed affinity the right RDMA API is to
directly ask for the desired CPU binding when creating the CQ, and
optionally a way to change the CPU binding of the CQ at runtime.
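Something along these lines - the names here are entirely made up, just
to sketch the shape such an API could take:

	/* hypothetical, not an existing verbs API: allocate a CQ whose
	 * completion processing is bound to a caller-chosen CPU */
	struct ib_cq *ib_alloc_cq_on_cpu(struct ib_device *dev, void *private,
					 int nr_cqe, int cpu,
					 enum ib_poll_context poll_ctx);

	/* hypothetical: rebind an existing CQ to another CPU at runtime */
	int ib_cq_set_cpu(struct ib_cq *cq, int cpu);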
I think the affinity management is referring to IRQD_AFFINITY_MANAGED,
which IIRC is the case when the device passes `struct irq_affinity` to
pci_alloc_irq_vectors_affinity().
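i.e. the usual pattern is something like this (a sketch from memory):

	struct irq_affinity affd = {
		.pre_vectors = 1,	/* e.g. keep one vector out of the spread */
	};
	int nvecs;

	/* PCI_IRQ_AFFINITY asks the core to spread the remaining vectors
	 * across CPUs; those vectors are then IRQD_AFFINITY_MANAGED */
	nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, max_vecs,
			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);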
Not sure what that has to do with passing a cpu to create_cq.
This obfuscated 'comp vector number' thing is nonsensical for a kAPI -
creating a CQ on a random CPU and then trying to work backwards to
figure out which CPU it was created on is silly.
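The dance it forces on consumers is (sketch):

	/* pick an opaque vector number more or less blindly... */
	cq = ib_alloc_cq(dev, priv, nr_cqe, comp_vector, IB_POLL_SOFTIRQ);

	/* ...then ask the device which CPUs that vector happened to be
	 * affine to, to reverse-engineer the queue mapping */
	mask = ib_get_vector_affinity(dev, comp_vector);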
I don't remember if the comp_vector maps 1:1 to an IRQ vector; if it
doesn't, then it is indeed obfuscated. But a similar model is heavily
used by the network stack with cpu_rmap, which is where this was
derived from.
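e.g. for accelerated RFS an ethernet driver does roughly this (a sketch
from memory, needs linux/cpu_rmap.h, teardown omitted):

	static int setup_arfs_rmap(struct net_device *netdev,
				   struct pci_dev *pdev, int nr_rx_queues)
	{
		int i;

		/* reverse map from irq affinity back to rx queue,
		 * consumed by the aRFS flow steering code */
		netdev->rx_cpu_rmap = alloc_irq_cpu_rmap(nr_rx_queues);
		if (!netdev->rx_cpu_rmap)
			return -ENOMEM;

		for (i = 0; i < nr_rx_queues; i++)
			irq_cpu_rmap_add(netdev->rx_cpu_rmap,
					 pci_irq_vector(pdev, i));
		return 0;
	}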
But regardless, it's been two years, it is effectively dead code, and
not a single user has complained about missing it. So we can safely
purge it, and if someone cares, we can debate adding it back.