Re: [PATCH 02/13] irq: Introduce IRQD_AFFINITY_MANAGED flag


 



On 06/14/2016 09:58 PM, Christoph Hellwig wrote:
> From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>
> Interrupts marked with this flag are excluded from user space interrupt
> affinity changes. Unlike the IRQ_NO_BALANCING flag, the kernel-internal
> affinity mechanism is not blocked.
>
> This flag will be used for multi-queue device interrupts.
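If I read the description correctly, the intent can be modelled by the
stand-alone snippet below. This is purely an illustration with made-up
names (IRQD_AFFINITY_MANAGED_MODEL, set_affinity_from_user(), etc.), not
code from this patch series: user space affinity changes are rejected for
managed interrupts, while kernel-internal changes still go through.

/*
 * Stand-alone model of the described semantics, NOT the actual kernel
 * code: a "managed" flag that rejects user space affinity changes while
 * still allowing kernel-internal ones.
 */
#include <stdbool.h>
#include <stdio.h>

#define IRQD_AFFINITY_MANAGED_MODEL	(1u << 0)

struct model_irq {
	unsigned int flags;
	unsigned long affinity_mask;
};

/* Path a write to /proc/irq/<n>/smp_affinity would take in this model. */
static bool set_affinity_from_user(struct model_irq *irq, unsigned long mask)
{
	if (irq->flags & IRQD_AFFINITY_MANAGED_MODEL)
		return false;	/* user space (e.g. irqbalance) is rejected */
	irq->affinity_mask = mask;
	return true;
}

/* Path the kernel itself would take, e.g. when spreading MSI-X vectors. */
static void set_affinity_from_kernel(struct model_irq *irq, unsigned long mask)
{
	irq->affinity_mask = mask;	/* not blocked by the managed flag */
}

int main(void)
{
	struct model_irq irq = { .flags = IRQD_AFFINITY_MANAGED_MODEL };

	set_affinity_from_kernel(&irq, 0x0f);
	printf("kernel set mask: 0x%lx\n", irq.affinity_mask);
	printf("user set allowed: %d\n", set_affinity_from_user(&irq, 0xf0));
	return 0;
}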

It's great to see that the goal of this patch series is to configure interrupt affinity automatically for adapters that support multiple MSI-X vectors. However, is excluding these interrupts from irqbalance really the way to go? Suppose, for example, that a system is equipped with two RDMA adapters, that these adapters are used by a blk-mq enabled block initiator driver, and that each adapter supports eight MSI-X vectors. Should the interrupts of the two RDMA adapters be assigned to different CPU cores? If so, which software layer should realize this: the kernel or user space?
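To make that question concrete, here is a trivial sketch using made-up
numbers (two adapters, eight vectors each, sixteen CPUs) that computes a
naive round-robin placement. It is not code from this patch series; the
open question is which layer should run this kind of placement policy.

/*
 * Hypothetical round-robin placement of 2 adapters x 8 MSI-X vectors
 * over the available CPUs; purely illustrative of the policy question.
 */
#include <stdio.h>

#define NR_ADAPTERS	2
#define NR_VECTORS	8
#define NR_CPUS		16	/* assumed core count for the example */

int main(void)
{
	for (int a = 0; a < NR_ADAPTERS; a++)
		for (int v = 0; v < NR_VECTORS; v++)
			printf("adapter %d, vector %d -> CPU %d\n",
			       a, v, (a * NR_VECTORS + v) % NR_CPUS);
	return 0;
}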

Sorry that I missed the first version of this patch series.

Thanks,

Bart.
--


