MSI IRQ affinity question

Hi All,

I am working on a kernel module for a PCI chip. The module registers
several PCI MSI interrupts. After a while, during the course of using
the PCI chip, the kernel module stops receiving interrupts (the IRQ
handler routine is not called), or receives them only very sporadically
(roughly every 2 seconds). However, if I change the IRQ affinity from
the default (CPU 10) to CPU 16, everything seems to work fine (mostly).
Further, if I change the IRQ affinity to CPU 0, I receive even fewer
(almost no) interrupts, and the chip becomes unusable.
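
For reference, the MSI setup in the module follows the usual pattern,
roughly like the sketch below (this is not the actual driver code;
the names, the vector count, and the handler body are placeholders):

#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t my_msi_handler(int irq, void *dev_id)
{
	/* acknowledge and handle the device event here */
	return IRQ_HANDLED;
}

static int my_setup_msi(struct pci_dev *pdev)
{
	int i, irq, nvec;

	/* request up to 4 MSI vectors (MSI only, no MSI-X) */
	nvec = pci_alloc_irq_vectors(pdev, 1, 4, PCI_IRQ_MSI);
	if (nvec < 0)
		return nvec;

	for (i = 0; i < nvec; i++) {
		irq = pci_irq_vector(pdev, i);
		if (request_irq(irq, my_msi_handler, 0, "my_pci_chip", pdev)) {
			/* cleanup of already-requested vectors omitted */
			return -EBUSY;
		}
	}
	return 0;
}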

Considering that the CPUs are idle (nothing else uses CPU 10 for IRQ
handling), why does changing the IRQ affinity from CPU 10 to CPU 16
make things better, and why does changing it to CPU 0 or CPU 1 make it
so much worse? I understand there would be some performance penalty if
a certain CPU were handling interrupts from multiple devices, but that
does not seem to be the case here.
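
In case it matters, the affinity can also be hinted from inside the
module instead of via /proc/irq/<irq>/smp_affinity; a rough sketch
(the irq number and CPU here are placeholders, and
irq_set_affinity_hint() is advisory only):

#include <linux/interrupt.h>
#include <linux/cpumask.h>

static int my_hint_affinity(int irq, int cpu)
{
	if (!cpu_online(cpu))
		return -EINVAL;
	/* advisory hint: irqbalance or the core may still move the IRQ */
	return irq_set_affinity_hint(irq, cpumask_of(cpu));
}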

Please advise.

Thanks,
kchahal




