Re: [PATCH 02/13] irq: Introduce IRQD_AFFINITY_MANAGED flag

On 06/15/2016 12:23 PM, Christoph Hellwig wrote:
> Hi Bart,
>
> On Wed, Jun 15, 2016 at 10:44:37AM +0200, Bart Van Assche wrote:
>> However, is excluding these interrupts from irqbalanced really the
>> way to go?
>
> What positive effect will irqbalanced have on explicitly spread
> interrupts?
>
>> Suppose e.g. that a system is equipped with two RDMA adapters, that
>> these adapters are used by a blk-mq enabled block initiator driver
>> and that each adapter supports eight MSI-X vectors. Should the
>> interrupts of the two RDMA adapters be assigned to different CPU
>> cores? If so, which software layer should realize this? The kernel
>> or user space?
>
> RDMA should eventually use the interrupt spreading implemented in this
> series, as should networking (RDMA actually is on my near-term todo
> list).
>
> RDMA block protocols will then pick up the queue information from the
> HCA driver.  I've not actually implemented this yet, but my current
> idea is:
>
>  - the HCA drivers are switched to use pci_alloc_irq_vectors to spread
>    their interrupt vectors around the system
>  - the HCA drivers will expose the irq_affinity array in struct
>    ib_device (we'll need to consider what to do about the RDMA stack's
>    odd use of completion vector instead of irq terminology, but that's
>    not a show stopper)
>  - multiqueue aware block drivers will then feed the irq_affinity
>    cpumask from the HCA driver to blk-mq.  We'll also need to ensure
>    that the number of protocol queues aligns nicely with the number of
>    hardware queues.  My current thinking is that they should be the
>    same or a fraction of the hardware completion queues, but this
>    might need some careful benchmarking.

Hello Christoph,

Today irqbalanced is responsible for deciding how interrupts from
different adapters are assigned to CPU cores. Does the above mean that,
for adapters that support multiple MSI-X vectors, the kernel will take
full responsibility for assigning interrupt vectors to CPU cores?
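
For reference, here is my mental model of the driver side of your first
bullet (a rough sketch only: apart from pci_alloc_irq_vectors(),
pci_irq_vector(), pci_free_irq_vectors() and the PCI_IRQ_AFFINITY flag
from this series, every name below is made up):

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Sketch: an HCA driver with up to eight per-queue vectors. */
struct hca_dev {
	void *queue[8];
};

static irqreturn_t hca_irq_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int hca_setup_irqs(struct pci_dev *pdev, struct hca_dev *hca)
{
	int i, ret, nvec;

	/*
	 * Ask the PCI core for up to eight MSI-X vectors and let it
	 * spread them over the online CPUs; the resulting affinities
	 * are managed by the kernel, not by irqbalanced.
	 */
	nvec = pci_alloc_irq_vectors(pdev, 1, 8,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (nvec < 0)
		return nvec;

	for (i = 0; i < nvec; i++) {
		ret = request_irq(pci_irq_vector(pdev, i), hca_irq_handler,
				  0, "hca", &hca->queue[i]);
		if (ret)
			goto out_free;
	}
	return 0;

out_free:
	while (--i >= 0)
		free_irq(pci_irq_vector(pdev, i), &hca->queue[i]);
	pci_free_irq_vectors(pdev);
	return ret;
}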

If two identical adapters are present in a system, will they generate
the same irq_affinity mask? Do you agree that interrupt vectors from
different adapters should be assigned to different CPU cores if enough
CPU cores are available? If so, which software layer will assign
interrupt vectors from different adapters to different CPU cores?
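
To make that concrete: as far as I can see, two identical adapters
that both go through the spreading code end up with identical affinity
masks, so vector i of both HCAs fires on the same CPUs. A hypothetical
consumer of the irq_affinity array from your second and third bullets
could look like the sketch below (neither the array nor this helper
exists today; the names just follow your description):

#include <linux/cpumask.h>

/*
 * Hypothetical: map each CPU to the blk-mq hardware context whose
 * completion vector is affine to that CPU.  Two adapters that expose
 * identical irq_affinity arrays produce identical mappings.
 */
static void ulp_map_queues(unsigned int nr_queues, unsigned int nr_vecs,
			   const struct cpumask *irq_affinity,
			   unsigned int *queue_of_cpu)
{
	unsigned int vec, cpu;

	for (vec = 0; vec < nr_vecs; vec++)
		for_each_cpu(cpu, &irq_affinity[vec])
			queue_of_cpu[cpu] = vec % nr_queues;
}

If that is the plan, the open question remains which layer, if any,
offsets the spreading between the two adapters.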

Thanks,

Bart.