On an x86 system under test with 1780 CPUs, topology_span_sane() takes
around 8 seconds cumulatively across all its invocations. It is an
expensive operation that sanity-checks the non-NUMA topology masks. CPU
topology does not change very frequently, so make this check optional
for systems where the topology is trusted and faster bootup is needed.
Restrict the check to the sched_verbose kernel cmdline option so that
systems that want to avoid the penalty can do so.

Cc: stable@xxxxxxxxxxxxxxx
Fixes: ccf74128d66c ("sched/topology: Assert non-NUMA topology masks don't (partially) overlap")
Signed-off-by: Saurabh Sengar <ssengar@xxxxxxxxxxxxxxxxxxx>
---
[V2]
- Use kernel cmdline param instead of compile time flag.

 kernel/sched/topology.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9748a4c8d668..4ca63bff321d 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2363,6 +2363,13 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
 {
 	int i = cpu + 1;
 
+	/* Skip the topology sanity check for non-debug boots, as it is a time-consuming operation */
+	if (!sched_debug_verbose) {
+		pr_info_once("%s: Skipping topology span sanity check. Use `sched_verbose` boot parameter to enable it.\n",
+			     __func__);
+		return true;
+	}
+
 	/* NUMA levels are allowed to overlap */
 	if (tl->flags & SDTL_OVERLAP)
 		return true;
-- 
2.43.0
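
For reference (not part of this patch): sched_debug_verbose is the flag
toggled by the `sched_verbose` early boot parameter. The existing wiring
lives in kernel/sched/debug.c and looks roughly like the sketch below;
exact code may differ between kernel versions.

	/* Approximate sketch of the existing sched_verbose wiring */
	bool sched_debug_verbose;

	static int __init sched_debug_setup(char *str)
	{
		sched_debug_verbose = true;
		return 0;
	}
	early_param("sched_verbose", sched_debug_setup);

With this change applied, booting with `sched_verbose` on the kernel
command line re-enables the topology span sanity check (alongside the
verbose sched-domain debug output that parameter already controls).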