On Tuesday 15 Oct 2019 at 11:29:56 (+0100), Valentin Schneider wrote:
> While the static key is correctly initialized as being disabled, it will
> remain forever enabled once turned on. This means that if we start with an
> asymmetric system and hotplug out enough CPUs to end up with an SMP system,
> the static key will remain set - which is obviously wrong. We should detect
> this and turn off things like misfit migration and capacity aware wakeups.
>
> As Quentin pointed out, having separate root domains makes this slightly
> trickier. We could have exclusive cpusets that create an SMP island - IOW,
> the domains within this root domain will not see any asymmetry. This means
> we need to count how many asymmetric root domains we have.
>
> Change the simple key enablement to an increment, and decrement the key
> counter when destroying domains that cover asymmetric CPUs.
>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Fixes: df054e8445a4 ("sched/topology: Add static_key for asymmetric CPU capacity optimizations")
> Signed-off-by: Valentin Schneider <valentin.schneider@xxxxxxx>
> ---
> Changes since v1:
>
> Use static_branch_{inc,dec} rather than enable/disable
> ---
>  kernel/sched/topology.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index b5667a273bf6..79944e969bcf 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2026,7 +2026,7 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *attr)
>  	rcu_read_unlock();
>
>  	if (has_asym)
> -		static_branch_enable_cpuslocked(&sched_asym_cpucapacity);
> +		static_branch_inc_cpuslocked(&sched_asym_cpucapacity);
>
>  	if (rq && sched_debug_enabled) {
>  		pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n",
> @@ -2124,8 +2124,17 @@ static void detach_destroy_domains(const struct cpumask *cpu_map)
>  	int i;
>
>  	rcu_read_lock();
> +
> +	if (static_key_enabled(&sched_asym_cpucapacity)) {
> +		unsigned int cpu = cpumask_any(cpu_map);
> +
> +		if (rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu)))
> +			static_branch_dec_cpuslocked(&sched_asym_cpucapacity);

Lockdep should scream for this :) static_branch_dec_cpuslocked() ends up
taking jump_label_mutex, which can sleep, and we're still inside the
rcu_read_lock() section here. One possible reshuffle is sketched below
the quoted patch.

> +	}
> +
>  	for_each_cpu(i, cpu_map)
>  		cpu_attach_domain(NULL, &def_root_domain, i);
> +
>  	rcu_read_unlock();
>  }
>
> --
> 2.22.0
>
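One option to avoid that would be to hoist the check out of the RCU
read-side section. The pointer is only tested against NULL, never
dereferenced, so rcu_access_pointer() should do here, and the key can
then be decremented in a context where sleeping is allowed. Untested
sketch:

	static void detach_destroy_domains(const struct cpumask *cpu_map)
	{
		int i;

		if (static_key_enabled(&sched_asym_cpucapacity)) {
			unsigned int cpu = cpumask_any(cpu_map);

			/*
			 * Only testing the pointer, not dereferencing it,
			 * so no RCU read-side section is required.
			 */
			if (rcu_access_pointer(per_cpu(sd_asym_cpucapacity, cpu)))
				static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
		}

		rcu_read_lock();
		for_each_cpu(i, cpu_map)
			cpu_attach_domain(NULL, &def_root_domain, i);
		rcu_read_unlock();
	}

The inc in build_sched_domains() is done once per asymmetric root
domain, so each dec here pairs with exactly one inc and the key drops
back to disabled once the last asymmetric root domain goes away.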