+Chao Gao

On Thu, Mar 31, 2022, Isaku Yamahata wrote:
> On Thu, Mar 31, 2022 at 12:03:15AM +0000, Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > On Mon, Mar 14, 2022, Isaku Yamahata wrote:
> > > - VMXON on all pCPUs: The TDX module initialization requires to enable VMX
> > > (VMXON) on all present pCPUs. vmx_hardware_enable() which is called on creating
> > > guest does it. It naturally fits with the TDX module initialization at creating
> > > first TD. I wanted to avoid code to enable VMXON on loading the kvm_intel.ko.
> >
> > That's a solvable problem, though making it work without exporting hardware_enable_all()
> > could get messy.
>
> Could you please explain any reason why it's bad idea to export it?

I'd really prefer to keep the hardware enable/disable logic internal to kvm_main.c
so that all architectures share a common flow, and so that kvm_main.c is the sole
owner.  I'm worried that exposing the helper will lead to other arch/vendor usage,
and that will end up with what is effectively duplicate flows.  Deduplicating arch
code into generic KVM is usually very difficult.

This might also be a good opportunity to make KVM slightly more robust.  Ooh, and
we can kill two birds with one stone.  There's an in-flight series to add
compatibility checks to hotplug[*].  But rather than special case hotplug, what if
we instead do hardware enable/disable during module load, and move the
compatibility check into the hardware_enable path?  That fixes the hotplug issue,
gives TDX a window for running post-VMXON code in kvm_init(), and makes the
broadcast IPI less wasteful on architectures that don't have compatibility checks.

I'm thinking something like this, maybe as a modification to patch 6 in Chao's
series, or more likely as a patch 7 so that the hotplug compat checks still get
in even if the early hardware enable doesn't work on all architectures for some
reason.
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 69c318fdff61..c6572a056072 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4838,8 +4838,13 @@ static void hardware_enable_nolock(void *junk)

        cpumask_set_cpu(cpu, cpus_hardware_enabled);

+       r = kvm_arch_check_processor_compat();
+       if (r)
+               goto out;
+
        r = kvm_arch_hardware_enable();
+out:
        if (r) {
                cpumask_clear_cpu(cpu, cpus_hardware_enabled);
                atomic_inc(&hardware_enable_failed);
@@ -5636,18 +5641,6 @@ void kvm_unregister_perf_callbacks(void)
 }
 #endif

-struct kvm_cpu_compat_check {
-       void *opaque;
-       int *ret;
-};
-
-static void check_processor_compat(void *data)
-{
-       struct kvm_cpu_compat_check *c = data;
-
-       *c->ret = kvm_arch_check_processor_compat(c->opaque);
-}
-
 int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
             struct module *module)
 {
@@ -5679,13 +5672,13 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
        if (r < 0)
                goto out_free_1;

-       c.ret = &r;
-       c.opaque = opaque;
-       for_each_online_cpu(cpu) {
-               smp_call_function_single(cpu, check_processor_compat, &c, 1);
-               if (r < 0)
-                       goto out_free_2;
-       }
+       r = hardware_enable_all();
+       if (r)
+               goto out_free_2;
+
+       kvm_arch_post_hardware_enable_setup();
+
+       hardware_disable_all();

        r = cpuhp_setup_state_nocalls(CPUHP_AP_KVM_STARTING, "kvm/cpu:starting",
                                      kvm_starting_cpu, kvm_dying_cpu);

[*] https://lore.kernel.org/all/20211227081515.2088920-7-chao.gao@xxxxxxxxx
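For completeness: the kvm_arch_post_hardware_enable_setup() call in the diff above isn't defined anywhere yet; the idea (my assumption, not part of the diff) would be to give it a weak no-op default in generic code so only architectures that actually need post-enable work, e.g. TDX module init after VMXON, have to implement it.  A standalone sketch of that weak-override pattern, compilable with GCC/Clang:

    #include <stdio.h>

    /*
     * Weak default: runs while hardware is enabled on all online CPUs
     * during kvm_init(); does nothing unless an arch overrides it.
     * Hook name is hypothetical, matching the diff above.
     */
    __attribute__((weak)) void kvm_arch_post_hardware_enable_setup(void)
    {
        /* no post-enable work by default */
    }

    int main(void)
    {
        /* In the proposed flow: enable_all -> hook -> disable_all. */
        kvm_arch_post_hardware_enable_setup();
        printf("post-enable hook ran\n");
        return 0;
    }

An arch that needs it (e.g. kvm_intel for TDX) would simply provide a strong definition of the same symbol, which the linker picks over the weak default.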