Re: PCI, isolcpus, and irq affinity

On Mon, Oct 12 2020 at 12:58, Chris Friesen wrote:
> On 10/12/2020 11:50 AM, Thomas Gleixner wrote:
>> On Mon, Oct 12 2020 at 11:58, Bjorn Helgaas wrote:
>>> On Mon, Oct 12, 2020 at 09:49:37AM -0600, Chris Friesen wrote:
>>>> I've got a linux system running the RT kernel with threaded irqs.  On
>>>> startup we affine the various irq threads to the housekeeping CPUs, but I
>>>> recently hit a scenario where after some days of uptime we ended up with a
>>>> number of NVME irq threads affined to application cores instead (not good
>>>> when we're trying to run low-latency applications).
>> 
>> These threads and the associated interrupt vectors are completely
>> harmless and fully idle as long as there is nothing on those isolated
>> CPUs which does disk I/O.
>
> Some of the irq threads are affined (by the kernel, presumably) to
> multiple CPUs: nvme1q2 and nvme0q2 were both affined to 0x38000038, and a
> couple of other queues were affined to 0x1c00001c0.
>
> In this case could disk I/O submitted by one of those CPUs end up 
> interrupting another one?

On older kernels, yes.

X86 has enforced an effective single-CPU affinity for interrupts since
v4.15.

Since v4.17, the associated irq thread always follows the hardware
effective interrupt affinity.

Since v5.6, the hardware interrupt itself is routed to a housekeeping CPU
in the affinity mask as long as one is online.

Thanks,

        tglx
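
(As a rough way to check this behaviour on a live system: a minimal Python
sketch, not from the thread, assuming a v4.15+ kernel that exposes
/proc/irq/<n>/effective_affinity and using a hypothetical IRQ number. It
compares the hardware effective affinity with the CPU affinity of the
matching threaded handler:)

    import os
    import re

    IRQ = 45  # hypothetical IRQ number, e.g. one of the nvme queue vectors

    def read_mask(text):
        # Hex CPU mask, possibly comma-separated into 32-bit groups.
        return int(text.strip().replace(",", ""), 16)

    def cpus(mask):
        return [c for c in range(mask.bit_length()) if mask & (1 << c)]

    # Hardware effective affinity: a single CPU on x86 since v4.15.
    with open(f"/proc/irq/{IRQ}/effective_affinity") as f:
        print("effective_affinity:", cpus(read_mask(f.read())))

    # The threaded handler is named "irq/<n>-<device>"; since v4.17 its
    # allowed CPUs follow the hardware effective affinity.
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
        except OSError:
            continue
        if re.fullmatch(rf"irq/{IRQ}-.*", comm):
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("Cpus_allowed:"):
                        print(comm, "Cpus_allowed:", cpus(read_mask(line.split()[1])))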




