On Mon, 3 Sep 2018, Kashyap Desai wrote:
> I am using " for-4.19/block " and this particular patch "a0c9259
> irq/matrix: Spread interrupts on allocation" is included.

Can you please try against 4.19-rc2 or later?

> I can see that 16 extra reply queues via pre_vectors are still assigned to
> CPU 0 (effective affinity ).
>
> irq 33, cpu list 0-71

The cpu list is irrelevant because that's the allowed affinity mask. The
effective one is what counts.

> # cat /sys/kernel/debug/irq/irqs/34
> node: 0
> affinity: 0-71
> effectiv: 0

So if all 16 have their effective affinity set to CPU0 then that's strange
at least.

Can you please provide the output of /sys/kernel/debug/irq/domains/VECTOR ?

> Ideally, what we are looking for 16 extra pre_vector reply queue is
> "effective affinity" to be within local numa node as long as that numa
> node has online CPUs. If not, we are ok to have effective cpu from any
> node.

Well, we surely can do the initial allocation and spreading on the local
numa node, but once all CPUs are offline on that node, then the whole thing
goes down the drain and allocates from where it sees fit.

I'll think about it some more, especially how to avoid the proliferation of
the affinity hint.

Thanks,

	tglx
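[Editor's note: a small sketch of how the requested debugfs information could be collected in one pass. It assumes debugfs is mounted at /sys/kernel/debug with CONFIG_GENERIC_IRQ_DEBUGFS enabled, and that the 16 pre_vectors reply queues occupy irqs 33-48, which is a guess extrapolated from "irq 33" and "irqs/34" above, not confirmed by the thread.]

```shell
#!/bin/sh
# Print the effective affinity of each suspected pre_vectors reply-queue irq.
# The irq range 33-48 is hypothetical; adjust it to the adapter's actual irqs.
show_effective() {
    irq=$1
    f="/sys/kernel/debug/irq/irqs/$irq"
    if [ -r "$f" ]; then
        # The per-irq debugfs file contains an "effectiv:" line, as quoted above.
        printf 'irq %s: %s\n' "$irq" "$(grep '^effectiv' "$f")"
    else
        printf 'irq %s: no debugfs entry\n' "$irq"
    fi
}

for irq in $(seq 33 48); do
    show_effective "$irq"
done

# The per-domain allocation state tglx asked for:
cat /sys/kernel/debug/irq/domains/VECTOR 2>/dev/null \
    || echo 'VECTOR domain info unavailable (debugfs not mounted or option off)'
```

On a kernel without CONFIG_GENERIC_IRQ_DEBUGFS the script degrades to the fallback messages instead of failing, which makes it safe to run before confirming the config.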