From: Valentin Schneider <vschneid@xxxxxxxxxx>

Tariq has pointed out that drivers allocating IRQ vectors would benefit
from having smarter NUMA-awareness - cpumask_local_spread() only knows
about the local node and everything outside is in the same bucket.

sched_domains_numa_masks is pretty much what we want to hand out (a
cpumask of CPUs reachable within a given distance budget), introduce
sched_numa_hop_mask() to export those cpumasks.

Link: http://lore.kernel.org/r/20220728191203.4055-1-tariqt@xxxxxxxxxx
Signed-off-by: Valentin Schneider <vschneid@xxxxxxxxxx>
Reviewed-by: Yury Norov <yury.norov@xxxxxxxxx>
Signed-off-by: Yury Norov <yury.norov@xxxxxxxxx>
---
 include/linux/topology.h |  7 +++++++
 kernel/sched/topology.c  | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/include/linux/topology.h b/include/linux/topology.h
index 72f264575698..344c2362755a 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -247,11 +247,18 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
 #ifdef CONFIG_NUMA
 int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node);
+extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops);
 #else
 static __always_inline
 int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
 {
 	return cpumask_nth(cpu, cpus);
 }
+
+static inline const struct cpumask *
+sched_numa_hop_mask(unsigned int node, unsigned int hops)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
 #endif /* CONFIG_NUMA */
 
 #endif /* _LINUX_TOPOLOGY_H */
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 2bf89186a10f..1233affc106c 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2124,6 +2124,39 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
 	return ret;
 }
 EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
+
+/**
+ * sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away from
+ *                         @node
+ * @node: The node to count hops from.
+ * @hops: Include CPUs up to that many hops away. 0 means local node.
+ *
+ * Return: On success, a pointer to a cpumask of CPUs at most @hops away from
+ * @node, an error value otherwise.
+ *
+ * Requires rcu_lock to be held. Returned cpumask is only valid within that
+ * read-side section, copy it if required beyond that.
+ *
+ * Note that not all hops are equal in distance; see sched_init_numa() for how
+ * distances and masks are handled.
+ * Also note that this is a reflection of sched_domains_numa_masks, which may change
+ * during the lifetime of the system (offline nodes are taken out of the masks).
+ */
+const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
+{
+	struct cpumask ***masks;
+
+	if (node >= nr_node_ids || hops >= sched_domains_numa_levels)
+		return ERR_PTR(-EINVAL);
+
+	masks = rcu_dereference(sched_domains_numa_masks);
+	if (!masks)
+		return ERR_PTR(-EBUSY);
+
+	return masks[hops][node];
+}
+EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
+
 #endif /* CONFIG_NUMA */
 
 static int __sdt_alloc(const struct cpumask *cpu_map)
-- 
2.34.1
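
As an illustration of the intended use, below is a minimal sketch of a
consumer that collects CPUs in order of increasing NUMA distance from a
device's node. pick_closest_cpus() is a hypothetical helper, not part of
this patch; it relies only on the sched_numa_hop_mask() API introduced
above plus existing cpumask and RCU primitives.

	#include <linux/cpumask.h>
	#include <linux/err.h>
	#include <linux/rcupdate.h>
	#include <linux/topology.h>

	/*
	 * Hypothetical helper (not part of this patch): fill @out with up to
	 * @nr_cpus CPUs, visiting CPUs in order of increasing NUMA distance
	 * from @node. Returns the number of CPUs actually collected.
	 */
	static unsigned int pick_closest_cpus(unsigned int node,
					      unsigned int nr_cpus,
					      struct cpumask *out)
	{
		const struct cpumask *mask;
		unsigned int hop, cpu, picked = 0;

		cpumask_clear(out);

		rcu_read_lock();
		/* sched_numa_hop_mask() returns -EINVAL once @hop exceeds the NUMA levels */
		for (hop = 0; ; hop++) {
			mask = sched_numa_hop_mask(node, hop);
			if (IS_ERR(mask))
				break;

			for_each_cpu(cpu, mask) {
				/* Hop masks are cumulative; skip CPUs already taken at a smaller hop */
				if (cpumask_test_cpu(cpu, out))
					continue;

				cpumask_set_cpu(cpu, out);
				if (++picked == nr_cpus)
					goto done;
			}
		}
	done:
		rcu_read_unlock();
		return picked;
	}

Because each hop mask contains all CPUs at most @hops away, subtracting the
CPUs already collected at smaller hops leaves exactly the CPUs newly reachable
at the current hop, which is what an IRQ-spreading caller wants to walk.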