Hi folks,

Tariq pointed out in [1] that drivers allocating IRQ vectors would benefit
from having smarter NUMA-awareness (cpumask_local_spread() doesn't quite
cut it).

The proposed interface involved an array of CPUs and a temporary cpumask,
and being my difficult self, what I'm proposing here is an interface that
doesn't require any temporary storage other than some stack variables (at
the cost of one wild macro). A rough usage sketch is appended at the end of
this letter.

[1]: https://lore.kernel.org/all/20220728191203.4055-1-tariqt@xxxxxxxxxx/

Revisions
=========

v4 -> v5
++++++++

o Rebased onto 6.1-rc1
o Ditched the CPU iterator, moved to a cpumask iterator (Yury)

v3 -> v4
++++++++

o Rebased on top of Yury's bitmap-for-next
o Added Tariq's mlx5e patch
o Made sched_numa_hop_mask() return cpu_online_mask for the
  NUMA_NO_NODE && hops=0 case

v2 -> v3
++++++++

o Added for_each_cpu_and() and for_each_cpu_andnot() tests (Yury)
o New patches to fix issues raised by running the above
o New patch to use for_each_cpu_andnot() in sched/core.c (Yury)

v1 -> v2
++++++++

o Split _find_next_bit() @invert into @invert1 and @invert2 (Yury)
o Rebase onto v6.0-rc1

Cheers,
Valentin

Tariq Toukan (1):
  net/mlx5e: Improve remote NUMA preferences used for the IRQ affinity
    hints

Valentin Schneider (2):
  sched/topology: Introduce sched_numa_hop_mask()
  sched/topology: Introduce for_each_numa_hop_mask()

 drivers/net/ethernet/mellanox/mlx5/core/eq.c | 18 +++++++++--
 include/linux/topology.h                     | 32 ++++++++++++++++++++
 kernel/sched/topology.c                      | 31 +++++++++++++++++++
 3 files changed, 79 insertions(+), 2 deletions(-)

--
2.31.1
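
Usage sketch
============

Not part of the series, just an illustration of how a driver could consume
the iterator to spread IRQ affinity hints over CPUs in order of increasing
NUMA distance from a home node. The exact macro arguments, the cumulative
nature of the hop masks, the RCU protection and the assign() callback are
assumptions on my part; the real definitions live in patches 2 and 3, and
the mlx5e patch shows an actual user.

  /*
   * Hypothetical consumer of for_each_numa_hop_mask(): hand out @nvec
   * vectors to CPUs, closest (NUMA-wise) to @node first. Each hop mask is
   * assumed to also contain the CPUs of the closer hops, hence
   * for_each_cpu_andnot() to skip CPUs already visited.
   */
  #include <linux/cpumask.h>
  #include <linux/rcupdate.h>
  #include <linux/topology.h>

  static void spread_irq_affinity(int node, unsigned int nvec,
                                  void (*assign)(unsigned int vec, unsigned int cpu))
  {
          const struct cpumask *mask, *prev = cpu_none_mask;
          unsigned int cpu, vec = 0;

          if (!nvec)
                  return;

          rcu_read_lock();        /* assumed: hop masks are RCU-protected */
          for_each_numa_hop_mask(mask, node) {
                  /* Only visit CPUs not already covered by a closer hop */
                  for_each_cpu_andnot(cpu, mask, prev) {
                          assign(vec, cpu);
                          if (++vec >= nvec)
                                  goto done;
                  }
                  prev = mask;
          }
  done:
          rcu_read_unlock();
  }

The point of the shape above is that, unlike the array-of-CPUs proposal in
[1], the caller only needs a couple of stack variables and no temporary
cpumask allocation.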