If one vector is spread over several CPUs, the interrupt is usually only
handled on one of these CPUs. Meanwhile, IO can be issued to the single hw
queue from different CPUs concurrently, which easily causes an IRQ flood
and CPU lockup.

Pass IRQF_RESCUE_THREAD in the above case to ask genirq to handle the
interrupt in the rescue thread when an IRQ flood is detected.

Cc: Long Li <longli@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Keith Busch <keith.busch@xxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Sagi Grimberg <sagi@xxxxxxxxxxx>
Cc: John Garry <john.garry@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxxx>
Cc: linux-nvme@xxxxxxxxxxxxxxxxxxx
Cc: linux-scsi@xxxxxxxxxxxxxxx
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
 drivers/nvme/host/pci.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 45a80b708ef4..0b8d49470230 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1501,8 +1501,21 @@ static int queue_request_irq(struct nvme_queue *nvmeq)
 		return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq_check,
 				nvme_irq, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
 	} else {
-		return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq,
-				NULL, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
+		char *devname;
+		const struct cpumask *mask;
+		unsigned long irqflags = IRQF_SHARED;
+		int vector = pci_irq_vector(pdev, nvmeq->cq_vector);
+
+		devname = kasprintf(GFP_KERNEL, "nvme%dq%d", nr, nvmeq->qid);
+		if (!devname)
+			return -ENOMEM;
+
+		mask = pci_irq_get_affinity(pdev, nvmeq->cq_vector);
+		if (mask && cpumask_weight(mask) > 1)
+			irqflags |= IRQF_RESCUE_THREAD;
+
+		return request_threaded_irq(vector, nvme_irq, NULL, irqflags,
+				devname, nvmeq);
 	}
 }
 
--
2.20.1
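
For reference, the decision made in the hunk above can be distilled into a
small standalone helper. This is only an illustrative sketch, not part of the
patch: the helper name nvme_vector_irqflags() is made up here, while
pci_irq_get_affinity(), cpumask_weight() and IRQF_SHARED are existing kernel
APIs, and IRQF_RESCUE_THREAD is the flag introduced earlier in this series.

	#include <linux/interrupt.h>
	#include <linux/pci.h>
	#include <linux/cpumask.h>

	/*
	 * Illustrative only: pick request_threaded_irq() flags for one
	 * MSI-X vector. If the vector's affinity mask spans more than one
	 * CPU, ask genirq for the rescue thread so an interrupt flood can
	 * be moved out of hard-irq context. Helper name is hypothetical.
	 */
	static unsigned long nvme_vector_irqflags(struct pci_dev *pdev,
						  int cq_vector)
	{
		const struct cpumask *mask;
		unsigned long irqflags = IRQF_SHARED;

		mask = pci_irq_get_affinity(pdev, cq_vector);
		if (mask && cpumask_weight(mask) > 1)
			irqflags |= IRQF_RESCUE_THREAD;

		return irqflags;
	}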