In the case of IRQF_RESCUE_THREAD, the threaded handler is only used to
handle interrupts when an IRQ flood occurs. Use the IRQ's affinity for
this thread so that the scheduler may select other, less busy CPUs for
handling the interrupt.

Cc: Long Li <longli@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Keith Busch <keith.busch@xxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Sagi Grimberg <sagi@xxxxxxxxxxx>
Cc: John Garry <john.garry@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxxx>
Cc: linux-nvme@xxxxxxxxxxxxxxxxxxx
Cc: linux-scsi@xxxxxxxxxxxxxxx
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
 kernel/irq/manage.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 1566abbf50e8..03bc041348b7 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -968,7 +968,18 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
 	if (cpumask_available(desc->irq_common_data.affinity)) {
 		const struct cpumask *m;
 
-		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
+		/*
+		 * A managed IRQ's affinity is set up with NUMA locality in
+		 * mind. Also, if IRQF_RESCUE_THREAD is set, an interrupt
+		 * flood has been triggered, so ask the scheduler to run the
+		 * thread on the CPUs specified by this interrupt's affinity.
+		 */
+		if ((action->flags & IRQF_RESCUE_THREAD) &&
+		    irqd_affinity_is_managed(&desc->irq_data))
+			m = desc->irq_common_data.affinity;
+		else
+			m = irq_data_get_effective_affinity_mask(
+					&desc->irq_data);
 		cpumask_copy(mask, m);
 	} else {
 		valid = false;
-- 
2.20.1
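
For readers following along, here is a minimal sketch of a driver that could
end up with a rescue thread affected by this change. It assumes the
IRQF_RESCUE_THREAD flag introduced earlier in this series can be passed
through request_threaded_irq(); the handler names, "my-dev" and my_dev_setup()
are purely illustrative, and whether the flag is set by drivers or by the irq
core is decided elsewhere in the series.

#include <linux/interrupt.h>

/*
 * Illustrative only, not part of this patch: a hypothetical driver
 * requesting a threaded interrupt with the rescue flag from this
 * series.  my_hard_handler(), my_rescue_thread_fn(), my_dev_setup()
 * and "my-dev" are made-up names.
 */
static irqreturn_t my_hard_handler(int irq, void *dev_id)
{
	/* Normal, non-flood path: handle the interrupt in hard irq context. */
	return IRQ_HANDLED;
}

static irqreturn_t my_rescue_thread_fn(int irq, void *dev_id)
{
	/*
	 * Rescue path: with this patch, the scheduler may place this
	 * thread on any CPU in the managed affinity mask instead of
	 * only the effective affinity CPU.
	 */
	return IRQ_HANDLED;
}

static int my_dev_setup(unsigned int irq, void *my_dev)
{
	return request_threaded_irq(irq, my_hard_handler, my_rescue_thread_fn,
				    IRQF_RESCUE_THREAD, "my-dev", my_dev);
}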