Re: PCI, isolcpus, and irq affinity

On Mon, Oct 12 2020 at 21:24, David Woodhouse wrote:
> On Mon, 2020-10-12 at 21:31 +0200, Thomas Gleixner wrote:
>> > In this case could disk I/O submitted by one of those CPUs end up 
>> > interrupting another one?
>> 
>> On older kernels, yes.
>> 
>> X86 enforces effective single CPU affinity for interrupts since v4.15.
>
> Is that here to stay?

Yes. The way logical mode works is that it sends the vast majority of
interrupts to the first CPU in the logical mask anyway. So the benefit is
pretty much zero, and we haven't had anyone complaining since we switched
to that mode.

Having single CPU affinity enforced made the whole x86 affinity
disaster^Wlogic way simpler and also reduced vector pressure
significantly.
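
To illustrate (a standalone sketch, not the actual kernel code; the APIC ID
table and the 4-CPU box below are made-up assumptions): with single-CPU
effective affinity the destination is computed from exactly one CPU, so
logical vs. physical destination mode only changes how that one CPU is
encoded, not how many CPUs can be hit.

/*
 * Sketch only: destination encoding for a single target CPU in
 * physical vs. x2APIC logical (cluster) mode.
 */
#include <stdio.h>

/* Hypothetical CPU -> physical APIC ID table for a 4-CPU box. */
static const unsigned int physical_apicid[4] = { 0, 1, 2, 3 };

/* Physical mode: destination is the target CPU's physical APIC ID. */
static unsigned int calc_dest_physical(unsigned int cpu)
{
	return physical_apicid[cpu];
}

/*
 * x2APIC logical (cluster) mode: cluster number in the upper 16 bits,
 * a one-hot bit for the CPU within its 16-CPU cluster in the lower
 * 16 bits - still exactly one target CPU.
 */
static unsigned int calc_dest_logical(unsigned int cpu)
{
	unsigned int id = physical_apicid[cpu];

	return ((id / 16) << 16) | (1u << (id % 16));
}

int main(void)
{
	for (unsigned int cpu = 0; cpu < 4; cpu++)
		printf("cpu%u: physical dest %#x, logical dest %#x\n",
		       cpu, calc_dest_physical(cpu), calc_dest_logical(cpu));
	return 0;
}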

> Because it means that sending external interrupts
> in logical mode is kind of pointless, and we might as well do this...
>
> --- a/arch/x86/kernel/apic/x2apic_cluster.c
> +++ b/arch/x86/kernel/apic/x2apic_cluster.c
> @@ -187,3 +187,3 @@ static struct apic apic_x2apic_cluster __ro_after_init = {
>         .irq_delivery_mode              = dest_Fixed,
> -       .irq_dest_mode                  = 1, /* logical */
> +       .irq_dest_mode                  = 0, /* physical */
>  
> @@ -205,3 +205,3 @@ static struct apic apic_x2apic_cluster __ro_after_init = {
>  
> -       .calc_dest_apicid               = x2apic_calc_apicid,
> +       .calc_dest_apicid               = apic_default_calc_apicid,
>  
>
> And then a bunch of things which currently set x2apic_phys just because
> of *external* IRQ limitations, no longer have to, and can still benefit
> from multicast of IPIs to whole clusters at a time.

Indeed, never thought about that.
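
For illustration, a standalone sketch (simplified, not the kernel's actual
IPI path; the identity CPU-to-APIC-ID mapping is an assumption) of why
cluster mode still pays off for IPIs even when external interrupts use
physical destinations: all targets within a cluster can be reached with a
single ICR write carrying a bitmask, instead of one write per CPU.

#include <stdint.h>
#include <stdio.h>

#define NR_CPUS		64
#define CLUSTER_SIZE	16

/* Hypothetical identity mapping: CPU n has physical APIC ID n. */
static unsigned int apicid(unsigned int cpu)
{
	return cpu;
}

/* Send 'vector' to every CPU set in 'mask', one ICR write per cluster. */
static void send_ipi_mask_cluster(uint64_t mask, unsigned int vector)
{
	uint32_t cluster_mask[NR_CPUS / CLUSTER_SIZE] = { 0 };

	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!(mask & (1ULL << cpu)))
			continue;

		unsigned int id = apicid(cpu);

		cluster_mask[id / CLUSTER_SIZE] |= 1u << (id % CLUSTER_SIZE);
	}

	for (unsigned int c = 0; c < NR_CPUS / CLUSTER_SIZE; c++) {
		if (!cluster_mask[c])
			continue;
		/* The real code would write this destination to the ICR. */
		printf("ICR write: dest %#x, vector %#x\n",
		       (c << 16) | cluster_mask[c], vector);
	}
}

int main(void)
{
	/* IPI to CPUs 0-3 and 17: two ICR writes instead of five. */
	send_ipi_mask_cluster(0xfULL | (1ULL << 17), 0xfd);
	return 0;
}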

Thanks,

        tglx


