For a CONFIG_CPUMASK_OFFSTACK=y kernel, explicit allocation of a cpumask
variable on the stack is not recommended, since it can cause a potential
stack overflow. Instead, kernel code should always use the *cpumask_var
API(s) to allocate cpumask variables in a config-neutral way, leaving the
allocation strategy to CONFIG_CPUMASK_OFFSTACK.

However, dynamic allocation in a cpuhp teardown callback is somewhat
problematic if the allocation fails (which is unlikely, but still
possible):

- If -ENOMEM is returned to the caller, the kernel crashes for a
  non-bringup teardown;
- If the callback pretends nothing happened and returns 0 to the caller,
  it may leave the system in an inconsistent/compromised state.

Use the newly-introduced cpumask_any_and_but() to address all of the
issues above. It eliminates the temporary cpumask variable in a generic
way, no matter how that variable would have been allocated.

Suggested-by: Mark Rutland <mark.rutland@xxxxxxx>
Signed-off-by: Dawei Li <dawei.li@xxxxxxxxxxxx>
---
 drivers/perf/arm-cmn.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
index 7ef9c7e4836b..6bfb0c4a1287 100644
--- a/drivers/perf/arm-cmn.c
+++ b/drivers/perf/arm-cmn.c
@@ -1950,20 +1950,20 @@ static int arm_cmn_pmu_offline_cpu(unsigned int cpu, struct hlist_node *cpuhp_no
 	struct arm_cmn *cmn;
 	unsigned int target;
 	int node;
-	cpumask_t mask;
 
 	cmn = hlist_entry_safe(cpuhp_node, struct arm_cmn, cpuhp_node);
 	if (cpu != cmn->cpu)
 		return 0;
 	node = dev_to_node(cmn->dev);
-	if (cpumask_and(&mask, cpumask_of_node(node), cpu_online_mask) &&
-	    cpumask_andnot(&mask, &mask, cpumask_of(cpu)))
-		target = cpumask_any(&mask);
-	else
+
+	target = cpumask_any_and_but(cpumask_of_node(node), cpu_online_mask, cpu);
+	if (target >= nr_cpu_ids)
 		target = cpumask_any_but(cpu_online_mask, cpu);
+
 	if (target < nr_cpu_ids)
 		arm_cmn_migrate(cmn, target);
+
 	return 0;
 }
-- 
2.27.0
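
P.S. For reviewers unfamiliar with the new helper, a rough sketch of the
intended semantics of cpumask_any_and_but() follows. This is an
approximation for illustration only (the _sketch name is made up here),
not the actual implementation from this series:

	#include <linux/cpumask.h>

	/*
	 * Sketch: return any CPU present in both @mask1 and @mask2,
	 * excluding @cpu; return >= nr_cpu_ids if no such CPU exists.
	 */
	static inline unsigned int
	cpumask_any_and_but_sketch(const struct cpumask *mask1,
				   const struct cpumask *mask2,
				   unsigned int cpu)
	{
		unsigned int i;

		for_each_cpu_and(i, mask1, mask2)
			if (i != cpu)
				return i;

		return nr_cpu_ids;
	}

With this, arm_cmn_pmu_offline_cpu() needs no cpumask storage of its own:
it first looks for an online CPU on the same NUMA node other than the one
going down, and only falls back to cpumask_any_but(cpu_online_mask, cpu)
when none is found.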