On Tue, May 30 2023 at 09:56, Sean Christopherson wrote:
> On Tue, May 30, 2023, Thomas Gleixner wrote:
>> On Tue, May 30 2023 at 15:29, Kirill A. Shutemov wrote:
>> > On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
>> >> The decision to allow parallel bringup of secondary CPUs checks
>> >> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
>> >> parallel bootup because accessing the local APIC is intercepted and raises
>> >> a #VC or #VE, which cannot be handled at that point.
>> >>
>> >> The check works correctly, but only for AMD encrypted guests. TDX does not
>> >> set that flag.
>> >>
>> >> Check for cc_vendor != CC_VENDOR_NONE instead. That might be overbroad, but
>> >> definitely works for both AMD and Intel.
>> >
>> > It boots fine with TDX, but I think it is wrong. cc_get_vendor() will
>> > report CC_VENDOR_AMD even on bare metal if SME is enabled. I don't think
>> > we want it.
>>
>> Right. Did not think about that.
>>
>> But the same way is CC_ATTR_GUEST_MEM_ENCRYPT overbroad for AMD. Only
>> SEV-ES traps RDMSR if I'm understanding that maze correctly.
>
> Ya, regular SEV doesn't encrypt register state.

That aside. From a semantic POV, making this decision about parallel
bootup based on some magic CC encryption attribute is questionable.

I'm tending to just do the below and make this CC agnostic (except that
I couldn't find the right spot for SEV-ES to clear that flag.)
Thanks,

        tglx
---
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -871,5 +871,7 @@ void __init tdx_early_init(void)
 	x86_platform.guest.enc_tlb_flush_required = tdx_tlb_flush_required;
 	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
 
+	x86_cpuinit.parallel_bringup = false;
+
 	pr_info("Guest detected\n");
 }
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_X86_PLATFORM_H
 #define _ASM_X86_PLATFORM_H
 
+#include <linux/bits.h>
 #include <asm/bootparam.h>
 
 struct ghcb;
@@ -177,11 +178,14 @@ struct x86_init_ops {
  * struct x86_cpuinit_ops - platform specific cpu hotplug setups
  * @setup_percpu_clockev:	set up the per cpu clock event device
  * @early_percpu_clock_init:	early init of the per cpu clock event device
+ * @fixup_cpu_id:		fixup function for cpuinfo_x86::phys_proc_id
+ * @parallel_bringup:		Parallel bringup control
  */
 struct x86_cpuinit_ops {
 	void (*setup_percpu_clockev)(void);
 	void (*early_percpu_clock_init)(void);
 	void (*fixup_cpu_id)(struct cpuinfo_x86 *c, int node);
+	bool parallel_bringup;
 };
 
 struct timespec64;
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1287,6 +1287,11 @@ bool __init arch_cpuhp_init_parallel_bri
 		return false;
 	}
 
+	if (!x86_cpuinit.parallel_bringup) {
+		pr_info("Parallel CPU startup disabled by the platform\n");
+		return false;
+	}
+
 	smpboot_control = STARTUP_READ_APICID;
 	pr_debug("Parallel CPU startup enabled: 0x%08x\n", smpboot_control);
 	return true;
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -126,6 +126,7 @@ struct x86_init_ops x86_init __initdata
 struct x86_cpuinit_ops x86_cpuinit = {
 	.early_percpu_clock_init	= x86_init_noop,
 	.setup_percpu_clockev		= setup_secondary_APIC_clock,
+	.parallel_bringup		= true,
 };
 
 static void default_nmi_init(void) { };