For a CONFIG_CPUMASK_OFFSTACK=y kernel, explicitly allocating a cpumask
variable on the stack is not recommended, since it can cause a potential
stack overflow. Instead, kernel code should always use the *cpumask_var
API(s) to allocate cpumask variables in a config-neutral way, leaving the
allocation strategy to CONFIG_CPUMASK_OFFSTACK.

However, dynamic allocation in a cpuhp teardown callback is somewhat
problematic if the allocation fails (which is unlikely, but still
possible):

- If -ENOMEM is returned to the caller, the kernel crashes for a
  non-bringup teardown;
- If the callback pretends nothing happened and returns 0 to the caller,
  it may leave the system in an inconsistent/compromised state.

Use the newly-introduced cpumask_any_and_but() to address all of the
issues above. It eliminates the temporary cpumask variable in a generic
way, no matter how the cpumask variable would have been allocated.

Suggested-by: Mark Rutland <mark.rutland@xxxxxxx>
Signed-off-by: Dawei Li <dawei.li@xxxxxxxxxxxx>
---
 drivers/perf/arm_cspmu/arm_cspmu.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/perf/arm_cspmu/arm_cspmu.c b/drivers/perf/arm_cspmu/arm_cspmu.c
index b9a252272f1e..fd1004251665 100644
--- a/drivers/perf/arm_cspmu/arm_cspmu.c
+++ b/drivers/perf/arm_cspmu/arm_cspmu.c
@@ -1322,8 +1322,7 @@ static int arm_cspmu_cpu_online(unsigned int cpu, struct hlist_node *node)
 
 static int arm_cspmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
 {
-	int dst;
-	struct cpumask online_supported;
+	unsigned int dst;
 
 	struct arm_cspmu *cspmu =
 		hlist_entry_safe(node, struct arm_cspmu, cpuhp_node);
@@ -1333,9 +1332,8 @@ static int arm_cspmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
 		return 0;
 
 	/* Choose a new CPU to migrate ownership of the PMU to */
-	cpumask_and(&online_supported, &cspmu->associated_cpus,
-		    cpu_online_mask);
-	dst = cpumask_any_but(&online_supported, cpu);
+	dst = cpumask_any_and_but(&cspmu->associated_cpus,
+				  cpu_online_mask, cpu);
 	if (dst >= nr_cpu_ids)
 		return 0;
 
--
2.27.0
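
For context, a rough sketch of the rejected alternative the commit message
argues against: a teardown callback that dynamically allocates a
cpumask_var_t and has to pick a failure policy when allocation fails. This
is illustrative only and not part of the patch; the callback name and the
example_associated_cpus mask are hypothetical stand-ins for driver state
such as cspmu->associated_cpus.

#include <linux/cpumask.h>
#include <linux/cpuhotplug.h>
#include <linux/list.h>

/* Hypothetical per-device state, standing in for cspmu->associated_cpus. */
static struct cpumask example_associated_cpus;

static int example_cpu_teardown(unsigned int cpu, struct hlist_node *node)
{
	cpumask_var_t online_supported;
	unsigned int dst;

	/* Config-neutral allocation: on-stack for =n, kmalloc'ed for =y. */
	if (!alloc_cpumask_var(&online_supported, GFP_KERNEL)) {
		/*
		 * The dilemma from the commit message: returning -ENOMEM
		 * crashes a non-bringup teardown, while returning 0 here
		 * silently skips migrating the context and leaves the
		 * system in an inconsistent state.
		 */
		return 0;
	}

	cpumask_and(online_supported, &example_associated_cpus,
		    cpu_online_mask);
	dst = cpumask_any_but(online_supported, cpu);
	free_cpumask_var(online_supported);

	if (dst >= nr_cpu_ids)
		return 0;

	/* ... migrate ownership to 'dst' ... */
	return 0;
}

cpumask_any_and_but(&example_associated_cpus, cpu_online_mask, cpu) computes
the same 'dst' without the temporary mask, so no allocation (and no failure
policy) is needed at all.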