On Wed, Nov 02, 2022 at 04:06:20PM +0800, Binbin Wu <binbin.wu@xxxxxxxxxxxxxxx> wrote:
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 4b22196cb12c..25c30c8c2d9b 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -12337,16 +12337,23 @@ int kvm_arch_online_cpu(unsigned int cpu, int usage_count)
> >
> >  int kvm_arch_offline_cpu(unsigned int cpu, int usage_count)
> >  {
> > -	if (usage_count) {
> > -		/*
> > -		 * arch callback kvm_arch_hardware_disable() assumes that
> > -		 * preemption is disabled for historical reason. Disable
> > -		 * preemption until all arch callbacks are fixed.
> > -		 */
> > -		preempt_disable();
> > -		hardware_disable(NULL);
> > -		preempt_enable();
> > -	}
> > +	int ret;
> > +
> > +	if (!usage_count)
> > +		return 0;
> > +
> > +	ret = static_call(kvm_x86_offline_cpu)();
>
> Use static_call_cond instead?
> Seems the new interface for x86 is only implemented for Intel.

Not needed, because KVM_X86_OP_OPTIONAL_RET0(offline_cpu) is used.
Please remember

#define KVM_X86_OP_OPTIONAL_RET0(func) \
	static_call_update(kvm_x86_##func, (void *)kvm_x86_ops.func ? : \
					   (void *)__static_call_return0);

Thanks,
--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
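
For readers less familiar with the static call machinery, here is a minimal
userspace sketch of the idea behind KVM_X86_OP_OPTIONAL_RET0: when the vendor
module leaves an optional hook NULL, the call slot is patched at setup time to
a stub that returns 0 (the role __static_call_return0 plays for static calls),
so callers may invoke it unconditionally without static_call_cond(). This is
only an analogy, not the kernel's actual static-call implementation, and all
names below (vendor_ops, ops_update, vmx_offline_cpu, ...) are illustrative.

/* Minimal userspace analogy of KVM_X86_OP_OPTIONAL_RET0 (not kernel code). */
#include <stdio.h>

struct vendor_ops {
	int (*offline_cpu)(void);	/* optional hook, may be NULL */
};

/* Stand-in for __static_call_return0: default used when the hook is NULL. */
static int ret0_stub(void)
{
	return 0;
}

/* Stand-in for the static-call slot that static_call(kvm_x86_offline_cpu)() hits. */
static int (*offline_cpu_call)(void);

/* Stand-in for kvm_ops_update() expanding KVM_X86_OP_OPTIONAL_RET0(offline_cpu). */
static void ops_update(const struct vendor_ops *ops)
{
	offline_cpu_call = ops->offline_cpu ? ops->offline_cpu : ret0_stub;
}

/* Pretend Intel implements the hook... */
static int vmx_offline_cpu(void)
{
	printf("vmx_offline_cpu() called\n");
	return 0;
}

int main(void)
{
	/* ...while the other vendor does not. */
	const struct vendor_ops svm = { .offline_cpu = NULL };
	const struct vendor_ops vmx = { .offline_cpu = vmx_offline_cpu };

	ops_update(&svm);
	/* No NULL check needed: the stub supplies the "return 0" behavior. */
	printf("svm offline_cpu() -> %d\n", offline_cpu_call());

	ops_update(&vmx);
	printf("vmx offline_cpu() -> %d\n", offline_cpu_call());

	return 0;
}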