On Mon, May 22 2023 at 20:45, Mark Brown wrote:
> On Fri, May 12, 2023 at 11:07:50PM +0200, Thomas Gleixner wrote:
>> From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>>
>> There is often significant latency in the early stages of CPU bringup, and
>> time is wasted by waking each CPU (e.g. with INIT/SIPI/SIPI on x86) and
>> then waiting for it to respond before moving on to the next.
>>
>> Allow a platform to enable parallel setup, which brings all to-be-onlined
>> CPUs up to the CPUHP_BP_KICK_AP state. While this state advancement on the
>> control CPU (BP) is single-threaded, the important part is the last state,
>> CPUHP_BP_KICK_AP, which wakes the to-be-onlined CPUs up.
>
> We're seeing a regression on ThunderX2 systems with 256 CPUs with an
> arm64 defconfig running -next, which I've bisected to this patch. Before
> this commit we bring up 256 CPUs:
>
> [   29.137225] GICv3: CPU254: found redistributor 11e03 region 1:0x0000000441f60000
> [   29.137238] GICv3: CPU254: using allocated LPI pending table @0x00000008818e0000
> [   29.137305] CPU254: Booted secondary processor 0x0000011e03 [0x431f0af1]
> [   29.292421] Detected PIPT I-cache on CPU255
> [   29.292635] GICv3: CPU255: found redistributor 11f03 region 1:0x0000000441fe0000
> [   29.292648] GICv3: CPU255: using allocated LPI pending table @0x00000008818f0000
> [   29.292715] CPU255: Booted secondary processor 0x0000011f03 [0x431f0af1]
> [   29.292859] smp: Brought up 2 nodes, 256 CPUs
> [   29.292864] SMP: Total of 256 processors activated.
>
> but afterwards we only bring up 255, missing the 256th:
>
> [   29.165888] GICv3: CPU254: found redistributor 11e03 region 1:0x0000000441f60000
> [   29.165901] GICv3: CPU254: using allocated LPI pending table @0x00000008818e0000
> [   29.165968] CPU254: Booted secondary processor 0x0000011e03 [0x431f0af1]
> [   29.166120] smp: Brought up 2 nodes, 255 CPUs
> [   29.166125] SMP: Total of 255 processors activated.
>
> I can't immediately see an issue with the patch itself; for systems
> without CONFIG_HOTPLUG_PARALLEL=y it should replace the loop over
> cpu_present_mask done by for_each_present_cpu() with an open-coded one.
> I didn't check the rest of the series yet.
>
> The KernelCI bisection bot also isolated an issue on Odroid XU3 (a 32-bit
> arm system), bisected to the same patch, with the final CPU of the 8 on
> the system not coming up:
>
> https://groups.io/g/kernelci-results/message/42480?p=%2C%2C%2C20%2C0%2C0%2C0%3A%3Acreated%2C0%2Call-cpus%2C20%2C2%2C0%2C99054444
>
> Other boards I've checked (including some with multiple CPU clusters)
> seem to be bringing up all their CPUs, so it doesn't seem to just be
> general breakage.

That does not make any sense at all, and my tired brain does not help
either. Can you please apply the debug patch below and provide the
output?

Thanks,

        tglx
---
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 005f863a3d2b..90a9b2ae8391 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1767,13 +1767,20 @@ static void __init cpuhp_bringup_mask(const struct cpumask *mask, unsigned int n
 {
 	unsigned int cpu;
 
+	pr_info("Bringup max %u CPUs to %d\n", ncpus, target);
+
 	for_each_cpu(cpu, mask) {
 		struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+		int ret;
+
+		pr_info("Bringup CPU%u left %u\n", cpu, ncpus);
 
 		if (!--ncpus)
 			break;
 
-		if (cpu_up(cpu, target) && can_rollback_cpu(st)) {
+		ret = cpu_up(cpu, target);
+		pr_info("Bringup CPU%u %d\n", cpu, ret);
+		if (ret && can_rollback_cpu(st)) {
 			/*
 			 * If this failed then cpu_up() might have only
 			 * rolled back to CPUHP_BP_KICK_AP for the final
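
[Editor's note: the countdown placement in that loop can be modeled outside the
kernel. Below is a minimal, hypothetical standalone sketch, not kernel code:
bring_up() stands in for cpu_up(), a plain counting loop stands in for
for_each_cpu() over cpu_present_mask, and the mask is assumed fully populated.]

#include <stdio.h>

/* Hypothetical stand-in for cpu_up(); always succeeds in this model. */
static void bring_up(unsigned int cpu)
{
	printf("Bringup CPU%u\n", cpu);
}

/*
 * Model of the loop in cpuhp_bringup_mask() above: iterate over
 * nr_present CPUs and stop once the ncpus countdown hits zero.
 * Returns how many CPUs were brought up.
 */
static unsigned int bringup_mask(unsigned int nr_present, unsigned int ncpus)
{
	unsigned int cpu, booted = 0;

	for (cpu = 0; cpu < nr_present; cpu++) {
		/* Same placement as in the patch: decrement before bring_up(). */
		if (!--ncpus)
			break;
		bring_up(cpu);
		booted++;
	}
	return booted;
}

int main(void)
{
	/* 256 present CPUs, countdown also starting at 256. */
	printf("Booted %u CPUs\n", bringup_mask(256, 256));
	return 0;
}

[With both values at 256, the sketch reports 255 CPUs booted: the countdown
reaches zero on the final iteration before bring_up() is called, which lines
up with the 255-of-256 pattern in the logs quoted above.]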