6.6-stable review patch. If anyone has any objections, please let me know.

------------------

From: Yury Norov <yury.norov@xxxxxxxxx>

[ Upstream commit 9ecea9ae4d3127a09fb5dfcea87f248937a39ff5 ]

sched_numa_find_nth_cpu() doesn't handle NUMA_NO_NODE properly, and may
crash the kernel if passed it. On the other hand, the only user of
sched_numa_find_nth_cpu() has to check for the NUMA_NO_NODE case
explicitly.

It would be easier for users if this logic were moved into
sched_numa_find_nth_cpu().

Signed-off-by: Yury Norov <yury.norov@xxxxxxxxx>
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Link: https://lore.kernel.org/r/20230819141239.287290-6-yury.norov@xxxxxxxxx
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
 kernel/sched/topology.c | 3 +++
 lib/cpumask.c           | 4 +---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 8c1e183329d97..3a13cecf17740 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2126,6 +2126,9 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
 	struct cpumask ***hop_masks;
 	int hop, ret = nr_cpu_ids;
 
+	if (node == NUMA_NO_NODE)
+		return cpumask_nth_and(cpu, cpus, cpu_online_mask);
+
 	rcu_read_lock();
 
 	/* CPU-less node entries are uninitialized in sched_domains_numa_masks */
diff --git a/lib/cpumask.c b/lib/cpumask.c
index a7fd02b5ae264..34335c1e72653 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -146,9 +146,7 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
 	/* Wrap: we always want a cpu. */
 	i %= num_online_cpus();
 
-	cpu = (node == NUMA_NO_NODE) ?
-		cpumask_nth(i, cpu_online_mask) :
-		sched_numa_find_nth_cpu(cpu_online_mask, i, node);
+	cpu = sched_numa_find_nth_cpu(cpu_online_mask, i, node);
 
 	WARN_ON(cpu >= nr_cpu_ids);
 	return cpu;
-- 
2.43.0
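
For illustration only, not part of the patch: a minimal caller sketch assuming
the post-patch behaviour, where the NUMA_NO_NODE fallback lives inside
sched_numa_find_nth_cpu() so callers no longer need to special-case it
themselves. The function name pick_queue_cpu() is hypothetical.

#include <linux/cpumask.h>
#include <linux/topology.h>

/* Hypothetical caller: pick the i-th CPU close to @node for a queue. */
static unsigned int pick_queue_cpu(unsigned int i, int node)
{
	/*
	 * Works for node == NUMA_NO_NODE as well: after this patch the
	 * helper falls back to the i-th online CPU in the passed mask
	 * instead of dereferencing the per-node hop masks.
	 */
	return sched_numa_find_nth_cpu(cpu_online_mask, i, node);
}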