On an x86 system under test with 1780 CPUs, topology_span_sane() takes
around 8 seconds cumulatively across all its iterations. It is an
expensive operation that sanity-checks the non-NUMA topology masks.
CPU topology changes very rarely, so make this check optional for
systems where the topology is trusted and faster boot is desired.
Restrict the check to SCHED_DEBUG builds so that systems which want to
avoid the penalty can do so.

Fixes: ccf74128d66c ("sched/topology: Assert non-NUMA topology masks don't (partially) overlap")
Signed-off-by: Saurabh Sengar <ssengar@xxxxxxxxxxxxxxxxxxx>
---
 kernel/sched/topology.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9748a4c8d668..dacc8c6f978b 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2354,6 +2354,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
 	return sd;
 }
 
+#ifdef CONFIG_SCHED_DEBUG
 /*
  * Ensure topology masks are sane, i.e. there are no conflicts (overlaps) for
  * any two given CPUs at this (non-NUMA) topology level.
@@ -2387,6 +2388,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
 
 	return true;
 }
+#endif
 
 /*
  * Build sched domains for a given set of CPUs and attach the sched domains
@@ -2417,8 +2419,10 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 
 		sd = NULL;
 		for_each_sd_topology(tl) {
 
+#ifdef CONFIG_SCHED_DEBUG
 			if (WARN_ON(!topology_span_sane(tl, cpu_map, i)))
 				goto error;
+#endif
 
 			sd = build_sched_domain(tl, cpu_map, attr, sd, i);
-- 
2.43.0
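
For reference, the invariant topology_span_sane() enforces is that any
two CPUs' masks at a non-NUMA topology level are either identical or
fully disjoint; the check is pairwise and hence O(n^2) in the CPU
count, which is why it dominates at 1780 CPUs. Below is a minimal
standalone sketch of that invariant, using plain C with uint64_t masks
standing in for struct cpumask. NR_CPUS, level_mask and span_sane here
are illustrative stand-ins, not kernel API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 8

/*
 * Toy stand-in for tl->mask(cpu): the set of CPUs that 'cpu' shares
 * this (non-NUMA) topology level with, as a 64-bit mask.
 */
static uint64_t level_mask[NR_CPUS] = {
	0x0f, 0x0f, 0x0f, 0x0f,	/* CPUs 0-3 span one group */
	0xf0, 0xf0, 0xf0, 0xf0,	/* CPUs 4-7 span another */
};

/*
 * The invariant: any two CPUs' masks at this level must be either
 * equal or disjoint. A partial overlap would break the per-span
 * sched_group linking done while building the domains.
 */
static bool span_sane(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		/* Starting at cpu + 1 suffices: the relation is symmetric */
		for (int i = cpu + 1; i < NR_CPUS; i++) {
			uint64_t a = level_mask[cpu], b = level_mask[i];

			if (a != b && (a & b))
				return false;	/* partial overlap */
		}
	}
	return true;
}

int main(void)
{
	printf("sane: %d\n", span_sane());

	level_mask[3] = 0x18;	/* straddles both spans: now insane */
	printf("sane after corruption: %d\n", span_sane());
	return 0;
}

Scaled to 1780 CPUs with real cpumask operations (each themselves
O(n) in the mask width) and run once per topology level, this is the
cost the patch makes skippable on !SCHED_DEBUG builds.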