RE: Affinity managed interrupts vs non-managed interrupts

>
> On Mon, 3 Sep 2018, Kashyap Desai wrote:
> > I am using " for-4.19/block " and this particular patch "a0c9259
> > irq/matrix: Spread interrupts on allocation" is included.
>
> Can you please try against 4.19-rc2 or later?
>
> > I can see that 16 extra reply queues via pre_vectors are still
> > assigned to CPU 0 (effective affinity).
> >
> > irq 33, cpu list 0-71
>
> The cpu list is irrelevant because that's the allowed affinity mask. The
> effective one is what counts.
>
> > # cat /sys/kernel/debug/irq/irqs/34
> > node:     0
> > affinity: 0-71
> > effectiv: 0
>
> So if all 16 have their effective affinity set to CPU0 then that's
> strange at least.
>
> Can you please provide the output of
> /sys/kernel/debug/irq/domains/VECTOR ?

I tried 4.19-rc2. The behavior is the same as I posted earlier: all 16
pre_vector irqs have effective CPU = 0.

Here is the output of "/sys/kernel/debug/irq/domains/VECTOR":

# cat /sys/kernel/debug/irq/domains/VECTOR
name:   VECTOR
 size:   0
 mapped: 360
 flags:  0x00000041
Online bitmaps:       72
Global available:  13062
Global reserved:      86
Total allocated:     274
System: 43: 0-19,32,50,128,236-255
 | CPU | avl | man | act | vectors
     0   169    17    32  33-49,51-65
     1   181    17     4  33,36,52-53
     2   181    17     4  33-36
     3   181    17     4  33-34,52-53
     4   181    17     4  33,35,53-54
     5   181    17     4  33,35-36,54
     6   182    17     3  33,35-36
     7   182    17     3  33-34,36
     8   182    17     3  34-35,53
     9   181    17     4  33-34,52-53
    10   182    17     3  34,36,53
    11   182    17     3  34-35,54
    12   182    17     3  33-34,53
    13   182    17     3  33,37,55
    14   181    17     4  33-36
    15   181    17     4  33,35-36,54
    16   181    17     4  33,35,53-54
    17   182    17     3  33,36-37
    18   181    17     4  33,36,54-55
    19   181    17     4  33,35-36,54
    20   181    17     4  33,35-37
    21   180    17     5  33,35,37,55-56
    22   181    17     4  33-36
    23   181    17     4  33,35,37,55
    24   180    17     5  33-36,54
    25   181    17     4  33-36
    26   181    17     4  33-35,54
    27   181    17     4  34-36,54
    28   181    17     4  33-35,53
    29   182    17     3  34-35,53
    30   182    17     3  33-35
    31   181    17     4  34-36,54
    32   182    17     3  33-34,53
    33   182    17     3  34-35,53
    34   182    17     3  33-34,53
    35   182    17     3  34-36
    36   182    17     3  33-34,53
    37   181    17     4  33,35,52-53
    38   182    17     3  34-35,53
    39   182    17     3  34,52-53
    40   182    17     3  33-35
    41   182    17     3  34-35,53
    42   182    17     3  33-35
    43   182    17     3  34,52-53
    44   182    17     3  33-34,53
    45   182    17     3  34-35,53
    46   182    17     3  34,36,54
    47   182    17     3  33-34,52
    48   182    17     3  34,36,54
    49   182    17     3  33,51-52
    50   181    17     4  33-36
    51   182    17     3  33-35
    52   182    17     3  33-35
    53   182    17     3  34-35,53
    54   182    17     3  33-34,53
    55   182    17     3  34-36
    56   181    17     4  33-35,53
    57   182    17     3  34-36
    58   182    17     3  33-34,53
    59   181    17     4  33-35,53
    60   181    17     4  33-35,53
    61   182    17     3  33-34,53
    62   182    17     3  33-35
    63   182    17     3  34-36
    64   182    17     3  33-34,54
    65   181    17     4  33-35,53
    66   182    17     3  33-34,54
    67   182    17     3  34-36
    68   182    17     3  33-34,54
    69   182    17     3  34,36,54
    70   182    17     3  33-35
    71   182    17     3  34,36,54

>
> > Ideally, what we are looking for with the 16 extra pre_vector reply
> > queues is for the "effective affinity" to be within the local NUMA
> > node as long as that node has online CPUs. If not, we are OK with an
> > effective CPU from any node.
>
> Well, we surely can do the initial allocation and spreading on the local
> numa node, but once all CPUs are offline on that node, then the whole
> thing goes down the drain and allocates from where it sees fit. I'll
> think about it some more, especially how to avoid the proliferation of
> the affinity hint.

Thanks for looking into this request. This will help us implement the WIP
megaraid_sas driver changes. I can test any patch you want me to try.
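
For context, the WIP change is roughly along these lines (a minimal
sketch only, not the actual patch; the function name and
MEGASAS_EXTRA_QUEUES are illustrative):

#include <linux/pci.h>
#include <linux/interrupt.h>

#define MEGASAS_EXTRA_QUEUES	16	/* extra reply queues kept out of
					 * managed spreading (illustrative) */

static int megasas_alloc_irq_vectors(struct pci_dev *pdev, int max_vecs)
{
	/*
	 * The first 16 vectors are reserved as pre_vectors, so only the
	 * remaining vectors get the managed per-CPU spreading. The
	 * pre_vectors keep the default (non-managed) affinity, which is
	 * what currently ends up with effective CPU 0 as shown above.
	 */
	struct irq_affinity desc = {
		.pre_vectors = MEGASAS_EXTRA_QUEUES,
	};

	return pci_alloc_irq_vectors_affinity(pdev, 1, max_vecs,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &desc);
}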

>
> Thanks,
>
> 	tglx


