Re: PCI, isolcpus, and irq affinity

On Mon, Oct 12 2020 at 11:58, Bjorn Helgaas wrote:
> On Mon, Oct 12, 2020 at 09:49:37AM -0600, Chris Friesen wrote:
>> I've got a linux system running the RT kernel with threaded irqs.  On
>> startup we affine the various irq threads to the housekeeping CPUs, but I
>> recently hit a scenario where after some days of uptime we ended up with a
>> number of NVME irq threads affined to application cores instead (not good
>> when we're trying to run low-latency applications).

These threads and the associated interrupt vectors are completely
harmless and fully idle as long as nothing on those isolated CPUs does
disk I/O.
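For completeness: for non-managed interrupts the usual startup pinning
is just a write to /proc/irq/<N>/smp_affinity_list, and the irq thread
follows the hard interrupt's affinity. A minimal userspace sketch (the
IRQ number and CPU list below are made-up placeholders) looks like
this; for the managed NVMe queue vectors the kernel rejects such
writes, which is why they cannot simply be moved back to the
housekeeping CPUs:

  /* pin-irq.c - move one IRQ (and thereby its irq thread) to a given
   * CPU list via procfs.  Illustration only; the IRQ number and CPU
   * list in main() are placeholders.
   */
  #include <stdio.h>

  static int pin_irq(int irq, const char *cpulist)
  {
          char path[64];
          FILE *f;
          int ret = 0;

          snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity_list", irq);
          f = fopen(path, "w");
          if (!f)
                  return -1;

          /* For managed vectors (e.g. NVMe queue interrupts) the kernel
           * rejects this write; only non-managed IRQs can be moved. */
          if (fprintf(f, "%s\n", cpulist) < 0)
                  ret = -1;
          if (fclose(f))
                  ret = -1;
          return ret;
  }

  int main(void)
  {
          /* e.g. pin IRQ 42 to housekeeping CPUs 0-1 */
          if (pin_irq(42, "0-1"))
                  perror("pin_irq");
          return 0;
  }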

> pci_alloc_irq_vectors_affinity() basically just passes affinity
> information through to kernel/irq/affinity.c, and the PCI core doesn't
> change affinity after that.

Correct.
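
For reference, the driver side of this looks roughly like the
condensed sketch below, in the style of what NVMe does (the function
name, 'pdev' and the single pre-vector are assumptions for
illustration, not copied from any driver):

  #include <linux/pci.h>
  #include <linux/interrupt.h>

  static int example_setup_irqs(struct pci_dev *pdev, unsigned int nr_queues)
  {
          /* One non-managed vector up front (e.g. an admin queue); the
           * remaining vectors are spread over the possible CPUs by
           * kernel/irq/affinity.c and become managed interrupts. */
          struct irq_affinity affd = {
                  .pre_vectors = 1,
          };
          int nvecs;

          nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, nr_queues + 1,
                                                 PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
                                                 &affd);
          if (nvecs < 0)
                  return nvecs;

          /* The affinity of the managed vectors is now fixed; nothing
           * in the PCI core (or userspace) changes it afterwards. */
          return nvecs;
  }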

> This recent thread may be useful:
>
>   https://lore.kernel.org/linux-pci/20200928183529.471328-1-nitesh@xxxxxxxxxx/
>
> It contains a patch to "Limit pci_alloc_irq_vectors() to housekeeping
> CPUs".  I'm not sure that patch summary is 100% accurate because IIUC
> that particular patch only reduces the *number* of vectors allocated
> and does not actually *limit* them to housekeeping CPUs.

That patch is a band-aid at best; in the managed interrupt scenario it
does not really prevent interrupts and their threads from being affine
to isolated CPUs.
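
The core idea of that patch, roughly paraphrased (not the actual diff;
the helper usage is approximate, see the lore link above for the real
series), is only to clamp how many vectors get requested:

  #include <linux/kernel.h>
  #include <linux/cpumask.h>
  #include <linux/sched/isolation.h>

  /* Ask for fewer vectors when CPUs are isolated, so the spreading
   * code in kernel/irq/affinity.c has less to hand out.  This does
   * not control where the resulting vectors end up. */
  static unsigned int limit_to_housekeeping(unsigned int min_vecs,
                                            unsigned int max_vecs)
  {
          unsigned int hk_cpus;

          hk_cpus = cpumask_weight(housekeeping_cpumask(HK_FLAG_MANAGED_IRQ));
          if (hk_cpus < num_online_cpus())
                  max_vecs = clamp(hk_cpus, min_vecs, max_vecs);

          return max_vecs;
  }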

Thanks,

        tglx




