On Mon, Nov 27, 2023 at 09:36:38AM +0000, Souradeep Chakrabarti wrote:
> >-----Original Message-----
> >From: Jakub Kicinski <kuba@xxxxxxxxxx>
> >Sent: Wednesday, November 22, 2023 5:19 AM
> >To: Souradeep Chakrabarti <schakrabarti@xxxxxxxxxxxxxxxxxxx>
> >Cc: KY Srinivasan <kys@xxxxxxxxxxxxx>; Haiyang Zhang
> ><haiyangz@xxxxxxxxxxxxx>; wei.liu@xxxxxxxxxx; Dexuan Cui
> ><decui@xxxxxxxxxxxxx>; davem@xxxxxxxxxxxxx; edumazet@xxxxxxxxxx;
> >pabeni@xxxxxxxxxx; Long Li <longli@xxxxxxxxxxxxx>;
> >sharmaajay@xxxxxxxxxxxxx; leon@xxxxxxxxxx; cai.huoqing@xxxxxxxxx;
> >ssengar@xxxxxxxxxxxxxxxxxxx; vkuznets@xxxxxxxxxx; tglx@xxxxxxxxxxxxx;
> >linux-hyperv@xxxxxxxxxxxxxxx; netdev@xxxxxxxxxxxxxxx;
> >linux-kernel@xxxxxxxxxxxxxxx; linux-rdma@xxxxxxxxxxxxxxx; Souradeep
> >Chakrabarti <schakrabarti@xxxxxxxxxxxxx>; Paul Rosswurm
> ><paulros@xxxxxxxxxxxxx>
> >Subject: [EXTERNAL] Re: [PATCH V2 net-next] net: mana: Assigning IRQ
> >affinity on HT cores
> >
> >On Tue, 21 Nov 2023 05:54:37 -0800 Souradeep Chakrabarti wrote:
> >> The existing MANA design assigns an IRQ to every CPU, including
> >> sibling hyper-threads in a core. This causes multiple IRQs to be
> >> serviced on the same core and may reduce network performance with
> >> RSS.
> >>
> >> Improve performance by adhering to the RSS configuration, which
> >> assigns one IRQ per HT core.
> >
> >Drivers should not have to carry 120 LoC for something as basic as
> >spreading IRQs. Please take a look at include/linux/topology.h and if
> >there's nothing that fits your needs there - add it. That way other
> >drivers can reuse it.
> Because of the current design idea, it is easier to keep this inside
> the mana driver code. The idea of the IRQ distribution here is:
> 1) Loop through the interrupts to assign CPUs.
> 2) Find a non-sibling online CPU on the local NUMA node and assign the
>    IRQ to it.
> 3) If the number of IRQs exceeds the number of non-sibling CPUs on that
>    NUMA node, assign the remaining IRQs to sibling CPUs of that node.
> 4) Keep doing this until all the online CPUs are used or there are no
>    more IRQs.
> 5) If all CPUs on that node are used, go to the next NUMA node with
>    CPUs and keep doing 2 and 3.
> 6) If all CPUs on all NUMA nodes are used but there are still IRQs
>    left, wrap over to the first local NUMA node and continue doing 2,
>    3 and 4 until all IRQs are assigned.

Hi Souradeep,

(Thanks Jakub for sharing this thread with me)

If I understand your intention right, you can leverage the existing
cpumask_local_spread(). But I think I've got something better for you.
The series below adds a for_each_numa_cpu() iterator, which may help
you do most of the job without messing with node internals:

https://lore.kernel.org/netdev/ZD3l6FBnUh9vTIGc@yury-ThinkPad/T/

Using it, the pseudocode implementing your algorithm may look like
this:

	cpumask_var_t cpus;	/* preallocated with alloc_cpumask_var() */
	unsigned int cpu, hop;
	unsigned int irq = 0;
	int node;

again:
	cpu = get_cpu();
	node = cpu_to_node(cpu);
	put_cpu();
	cpumask_copy(cpus, cpu_online_mask);

	for_each_numa_cpu(cpu, hop, node, cpus) {
		/* All siblings are the same for IRQ spreading purposes */
		irq_set_affinity_and_hint(irq, topology_sibling_cpumask(cpu));

		/* One IRQ per sibling group */
		cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));

		if (++irq == num_irqs)
			break;
	}

	if (irq < num_irqs)
		goto again;

(Completely untested, just an idea.)

Thanks,
Yury
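
P.S. In case it helps while the for_each_numa_cpu() series is still in
review, here is a sketch of the same one-IRQ-per-sibling-group spreading
built only from primitives that exist today: cpumask_local_spread()
walks the online CPUs in NUMA-locality order, and marking the whole
sibling group as used skips hyper-threads until every group has an IRQ,
at which point it wraps over per step 6. The helper name and the
irqs[]/num_irqs parameters are made up for the example, and it is just
as untested as the pseudocode above:

	#include <linux/cpumask.h>
	#include <linux/interrupt.h>
	#include <linux/topology.h>

	/* Hypothetical helper: irqs[] holds the Linux IRQ numbers to
	 * spread; node is the device's local NUMA node.
	 */
	static int spread_irqs_one_per_core(int *irqs, int num_irqs, int node)
	{
		const struct cpumask *siblings;
		cpumask_var_t used;
		unsigned int cpu;
		int i, irq = 0;

		if (!zalloc_cpumask_var(&used, GFP_KERNEL))
			return -ENOMEM;

		while (irq < num_irqs) {
			/* One pass: one IRQ per sibling group, local node first */
			for (i = 0; i < num_online_cpus() && irq < num_irqs; i++) {
				cpu = cpumask_local_spread(i, node);
				if (cpumask_test_cpu(cpu, used))
					continue;	/* its sibling already got one */

				siblings = topology_sibling_cpumask(cpu);
				cpumask_or(used, used, siblings);
				irq_set_affinity_and_hint(irqs[irq++], siblings);
			}

			/* All sibling groups used: wrap over (step 6) */
			cpumask_clear(used);
		}

		free_cpumask_var(used);
		return 0;
	}

Like the pseudocode, it sets the affinity to the whole sibling mask
rather than a single thread, since all siblings are equivalent for
spreading purposes.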