Hi,

blk_mq_alloc_request_hctx() is used by NVMe fc/rdma/tcp/loop to connect
io queues, and the sw ctx is chosen as the 1st online cpu in
hctx->cpumask. However, all cpus in hctx->cpumask may be offline.

This usage model isn't well supported by blk-mq, which assumes that
allocation is always done on an online CPU in hctx->cpumask. That
assumption is tied to managed irq, which also requires blk-mq to drain
inflight requests in a hctx when the last cpu in hctx->cpumask goes
offline.

However, NVMe fc/rdma/tcp/loop don't use managed irq, so we should
allow them to allocate requests even when the specified hctx is
inactive (all cpus in hctx->cpumask are offline).

Fix blk_mq_alloc_request_hctx() by allowing request allocation when all
CPUs of this hctx are offline.

Wen Xiong has verified V4 in her nvmef test.

V5:
	- take John Garry's suggestion to replace the device field with a
	  new helper, device_has_managed_msi_irq()

V4:
	- remove patches for cleaning up queue map helpers
	- take Christoph's suggestion to add a field to 'struct device'
	  describing whether managed irqs are allocated from the device

V3:
	- clean up map queues helpers, and remove pci/virtio/rdma queue helpers
	- store managed irq info in the qmap

V2:
	- use flag of BLK_MQ_F_MANAGED_IRQ
	- pass BLK_MQ_F_MANAGED_IRQ from driver explicitly
	- kill BLK_MQ_F_STACKING

Ming Lei (3):
  driver core: add device_has_managed_msi_irq
  blk-mq: mark if one queue map uses managed irq
  blk-mq: don't deactivate hctx if managed irq isn't used

 block/blk-mq-pci.c                     |  1 +
 block/blk-mq-rdma.c                    |  3 +++
 block/blk-mq-virtio.c                  |  1 +
 block/blk-mq.c                         | 27 ++++++++++++++++----------
 block/blk-mq.h                         |  8 ++++++++
 drivers/base/core.c                    | 15 ++++++++++++++
 drivers/scsi/hisi_sas/hisi_sas_v2_hw.c |  1 +
 include/linux/blk-mq.h                 |  3 ++-
 include/linux/device.h                 |  2 ++
 9 files changed, 50 insertions(+), 11 deletions(-)

-- 
2.31.1
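
P.S. For readers who want a feel for patch 1, below is a minimal sketch of
what a device_has_managed_msi_irq() helper could look like: it walks the
device's MSI descriptors and reports whether any of them carries a managed
affinity. This is an illustration of the idea only, not the exact code in
the series; the iterator and field names (for_each_msi_entry(),
msi_desc::affinity, irq_affinity_desc::is_managed) are assumed from the MSI
core of kernels of that era.

	#include <linux/device.h>
	#include <linux/msi.h>

	/*
	 * Sketch only: return true if any MSI irq of this device was set
	 * up with managed affinity.  Walks the device's MSI descriptor
	 * list and checks the is_managed bit of each affinity descriptor.
	 */
	bool device_has_managed_msi_irq(struct device *dev)
	{
		struct msi_desc *desc;

		for_each_msi_entry(desc, dev) {
			if (desc->affinity && desc->affinity->is_managed)
				return true;
		}

		return false;
	}

With something like this in place, the queue map helpers touched by the
diffstat (blk-mq-pci.c, blk-mq-rdma.c, blk-mq-virtio.c) can record whether
the map is backed by managed irqs, and blk_mq_alloc_request_hctx() can then
skip the "hctx must have an online CPU" restriction for queues that don't
use managed irqs.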