OK, I have a rough idea of the concept. And again, I'd say megaraid sas
may not be a good candidate for exposing > 1 HW queue, as we hide the HW
queues and don't maintain symmetry with the blk-mq layer.
Sorry, my last response was not very clear. I was referring to reply
queues as HW queues, not submission queues. I agree with you: since the
megaraid_sas HW has a single submission queue, exposing > 1 HW queue
would not help to improve performance. The testing we did on the shared
tagset patches worked on by you/Hannes was to ensure there is no
performance drop for a driver based on a single HW submission queue.
OK, but I still have a concern with this. That's your choice.
Indeed, I do not even expect a performance increase from exposing > 1 HW
queue, since the driver already uses the reply map + managed interrupts.
The main reason for that change in some drivers - apart from losing the
duplicated ugliness of the reply map - is to leverage the blk-mq feature
to drain a hctx for CPU hotplug [0]. Is this something which megaraid
sas is vulnerable to and would benefit from?
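
To illustrate what exposing the queues to blk-mq looks like, here is a
rough sketch - not actual megaraid_sas code; my_hba, my_map_queues and
MY_PRE_VECTORS are made-up names, and the signatures are roughly those
of the v5.x kernels. The driver reports its real reply-queue count in
shost->nr_hw_queues and lets blk-mq derive the CPU <-> hctx mapping from
the managed MSI-X affinity, rather than keeping a private reply map:

#include <linux/pci.h>
#include <linux/blk-mq-pci.h>
#include <scsi/scsi_host.h>

/* hypothetical per-HBA structure for this sketch */
struct my_hba {
	struct pci_dev *pdev;
	unsigned int nr_reply_queues;	/* MSI-X vectors used for IO completion */
};

/* vectors reserved before the IO vectors, e.g. an admin vector */
#define MY_PRE_VECTORS	1

/* .map_queues callback in the scsi_host_template */
static int my_map_queues(struct Scsi_Host *shost)
{
	struct my_hba *hba = shost_priv(shost);

	/* Build the per-hctx CPU masks from the managed MSI-X affinity,
	 * skipping the pre-vectors not used for normal IO. */
	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
				     hba->pdev, MY_PRE_VECTORS);
}

With the queues visible like this (shost->nr_hw_queues set to
hba->nr_reply_queues at host allocation time), blk-mq knows which hctx a
request was queued on and can drain that hctx before the last CPU in its
mask goes offline.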
"megaraid_sas" driver would be benefited with draining of IO
completions directed to
hotplugged(offlined) CPU. With current driver IO completion would
hang, if CPU on which IO is to be
completed goes offline.
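
For context, the hang comes from the way a private reply map steers
completions. A simplified sketch of that style of setup (not the
driver's actual code; the names below are placeholders): each CPU is
assigned the reply queue whose managed MSI-X affinity mask contains it,
and the completion for a request is posted to the submitting CPU's reply
queue. If that CPU goes offline, the managed interrupt behind its reply
queue is shut down while requests may still be in flight, and nothing
ever processes those completions:

#include <linux/pci.h>
#include <linux/cpumask.h>

/* reply_map[] has nr_cpu_ids entries: reply queue to use per CPU */
static void my_setup_reply_map(struct pci_dev *pdev, unsigned int nr_queues,
			       unsigned int *reply_map)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < nr_queues; queue++) {
		/* Affinity mask of the managed interrupt behind this queue */
		mask = pci_irq_get_affinity(pdev, queue);
		if (!mask)
			goto fallback;

		for_each_cpu(cpu, mask)
			reply_map[cpu] = queue;
	}
	return;

fallback:
	/* No affinity info: spread CPUs across the queues round-robin */
	for_each_possible_cpu(cpu)
		reply_map[cpu] = cpu % nr_queues;
}

Exposing the queues to blk-mq instead lets the block layer quiesce a
hctx before its CPUs disappear, rather than leaving completions stranded
on a dead reply queue.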
But that feature will only work for the queues which you expose. For the
low-latency queues, there would be no draining*. However, the
low-latency interrupts are not managed; as such, I think that their
interrupts would migrate when their cpumask goes offline, rather than
being shut down, so they are not vulnerable to this problem.

* In principle, since you can submit the scsi request on a different hw
queue than expected from the blk-mq perspective, when we offline the cpu
which blk-mq meant to submit on, blk-mq may actually wait for requests
to complete on these low-latency queues in addition to the HW queue
which blk-mq thought the request would be submitted on - again, not
ideal, and may cause problems.
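
For reference, the managed/non-managed split above typically comes from
how the MSI-X vectors are allocated - the following is only a sketch of
one common pattern, not necessarily how megaraid_sas allocates its extra
low-latency vectors. Vectors inside the affinity spread are managed and
are shut down together with their CPUs, while pre-vectors excluded from
the spread are not managed, so offlining a CPU just migrates their
effective affinity:

#include <linux/pci.h>
#include <linux/interrupt.h>

static int my_alloc_vectors(struct pci_dev *pdev,
			    unsigned int nr_lowlatency,
			    unsigned int nr_reply_queues)
{
	/* The first nr_lowlatency vectors are kept out of the affinity
	 * spread (non-managed); the remaining vectors are managed, one
	 * per reply queue, spread across the CPUs. */
	struct irq_affinity desc = {
		.pre_vectors = nr_lowlatency,
	};

	return pci_alloc_irq_vectors_affinity(pdev,
					      nr_lowlatency + 1,
					      nr_lowlatency + nr_reply_queues,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &desc);
}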
Thanks,
John