On Wed, Nov 02, 2022 at 11:18:27PM +0000, Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> Non-x86 folks, please test on hardware when possible.  I made a _lot_ of
> mistakes when moving code around.  Thankfully, x86 was the trickiest code
> to deal with, and I'm fairly confident that I found all the bugs I
> introduced via testing.  But the number of mistakes I made and found on
> x86 makes me more than a bit worried that I screwed something up in other
> arch code.
>
> This is a continuation of Chao's series to do x86 CPU compatibility checks
> during virtualization hardware enabling[1], and of Isaku's series to try
> and clean up the hardware enabling paths so that x86 (Intel specifically)
> can temporarily enable hardware during module initialization without
> causing undue pain for other architectures[2].  It also includes one patch
> from another mini-series from Isaku that provides the less controversial
> patches[3].
>
> The main theme of this series is to kill off kvm_arch_init(),
> kvm_arch_hardware_(un)setup(), and kvm_arch_check_processor_compat(), which
> all originated in x86 code from way back when, and needlessly complicate
> both common KVM code and architecture code.  E.g. many architectures don't
> mark functions/data as __init/__ro_after_init purely because kvm_init()
> isn't marked __init to support x86's separate vendor modules.
>
> The idea/hope is that with those hooks gone (moved to arch code), it will
> be easier for x86 (and other architectures) to modify their module init
> sequences as needed without having to fight common KVM code.  E.g. I'm
> hoping that ARM can build on this to simplify its hardware enabling logic,
> especially the pKVM side of things.
>
> There are bug fixes throughout this series.  They are more scattered than
> I would usually prefer, but getting the sequencing correct was a gigantic
> pain for many of the x86 fixes due to needing to fix common code in order
> for the x86 fix to have any meaning.  And while the bugs are often fatal,
> they aren't all that interesting for most users as they either require a
> malicious admin or broken hardware, i.e. aren't likely to be encountered
> by the vast majority of KVM users.  So unless someone _really_ wants a
> particular fix isolated for backporting, I'm not planning on shuffling
> patches.
>
> Tested on x86.  Lightly tested on arm64.  Compile tested only on all other
> architectures.

Thanks for the patch series. I rebased the TDX KVM patch series on top of
it and it worked. Since CPU offline needs to be rejected in some cases (to
keep at least one CPU online on each package), an arch hook for CPU offline
is needed. I can keep it in the TDX KVM patch series.
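For reference, here is a sketch of what the TDX-side implementation of such
a hook could look like. This is illustration only, not part of the patch
below: tdx_offline_cpu() and the nr_configured_hkid counter are assumed
names based on the TDX KVM series. The idea is to refuse the offline with
-EBUSY when TDX guests exist and the outgoing CPU is the last online CPU
in its package:

/*
 * Hypothetical vendor-side .offline_cpu implementation (illustration
 * only; tdx_offline_cpu and nr_configured_hkid are assumed names from
 * the TDX KVM series, not part of this patch).
 */
#include <linux/atomic.h>
#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/smp.h>
#include <linux/topology.h>

static atomic_t nr_configured_hkid;	/* TDs holding an HKID (assumed) */

static int tdx_offline_cpu(void)
{
	/* The hotplug teardown callback runs on the outgoing CPU. */
	int curr_cpu = smp_processor_id();
	cpumask_var_t packages;
	int ret = 0;
	int i;

	/* No TDX guests, no per-package constraint to enforce. */
	if (!atomic_read(&nr_configured_hkid))
		return 0;

	if (!zalloc_cpumask_var(&packages, GFP_KERNEL))
		return -ENOMEM;

	/* Mark every package that still has another online CPU. */
	for_each_online_cpu(i) {
		if (i != curr_cpu)
			cpumask_set_cpu(topology_physical_package_id(i),
					packages);
	}

	/*
	 * Reject the offline if it would leave this CPU's package with
	 * no online CPUs; TDX needs one CPU per package for per-package
	 * operations, e.g. reclaiming a TD's encryption key.
	 */
	if (!cpumask_test_cpu(topology_physical_package_id(curr_cpu),
			      packages))
		ret = -EBUSY;

	free_cpumask_var(packages);
	return ret;
}

With that wired up via the new kvm_x86_ops member, e.g. .offline_cpu =
tdx_offline_cpu, the generic plumbing below propagates the error so the
offline can be aborted: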
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 23c0f4bc63f1..ef7bcb845d42 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -17,6 +17,7 @@ BUILD_BUG_ON(1)
 KVM_X86_OP(hardware_enable)
 KVM_X86_OP(hardware_disable)
 KVM_X86_OP(hardware_unsetup)
+KVM_X86_OP_OPTIONAL_RET0(offline_cpu)
 KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
 KVM_X86_OP(is_vm_type_supported)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 496c7c6eaff9..c420409aa96f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1468,6 +1468,7 @@ struct kvm_x86_ops {
 	int (*hardware_enable)(void);
 	void (*hardware_disable)(void);
 	void (*hardware_unsetup)(void);
+	int (*offline_cpu)(void);
 
 	bool (*has_emulated_msr)(struct kvm *kvm, u32 index);
 	void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2ed5a017f7bc..17c5d6a76c93 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12039,6 +12039,11 @@ void kvm_arch_hardware_disable(void)
 	drop_user_return_notifiers();
 }
 
+int kvm_arch_offline_cpu(unsigned int cpu)
+{
+	return static_call(kvm_x86_offline_cpu)();
+}
+
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
 {
 	return vcpu->kvm->arch.bsp_vcpu_id == vcpu->vcpu_id;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 620489b9aa93..4df79443fd11 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1460,6 +1460,7 @@ static inline void kvm_create_vcpu_debugfs(struct kvm_vcpu *vcpu) {}
 int kvm_arch_hardware_enable(void);
 void kvm_arch_hardware_disable(void);
 #endif
+int kvm_arch_offline_cpu(unsigned int cpu);
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
 bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f6b6dcedaa0a..f770fdc662d0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -5396,16 +5396,24 @@ static void hardware_disable_nolock(void *junk)
 	__this_cpu_write(hardware_enabled, false);
 }
 
+__weak int kvm_arch_offline_cpu(unsigned int cpu)
+{
+	return 0;
+}
+
 static int kvm_offline_cpu(unsigned int cpu)
 {
+	int r = 0;
+
 	mutex_lock(&kvm_lock);
-	if (kvm_usage_count) {
+	r = kvm_arch_offline_cpu(cpu);
+	if (!r && kvm_usage_count) {
 		preempt_disable();
 		hardware_disable_nolock(NULL);
 		preempt_enable();
 	}
 	mutex_unlock(&kvm_lock);
-	return 0;
+	return r;
 }
 
 static void hardware_disable_all_nolock(void)

--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>