On Mon, Jul 10, 2023 at 11:04:08AM -0700, Sean Christopherson wrote:
> On Mon, Jul 03, 2023, Marc Zyngier wrote:
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index aaeae1145359..a28c4ffe4932 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1894,8 +1894,17 @@ static void _kvm_arch_hardware_enable(void *discard)
> >
> >  int kvm_arch_hardware_enable(void)
> >  {
> > -        int was_enabled = __this_cpu_read(kvm_arm_hardware_enabled);
> > +        int was_enabled;
> >
> > +        /*
> > +         * Most calls to this function are made with migration
> > +         * disabled, but not with preemption disabled. The former is
> > +         * enough to ensure correctness, but most of the helpers
> > +         * expect the latter and will throw a tantrum otherwise.
> > +         */
> > +        preempt_disable();
> > +
> > +        was_enabled = __this_cpu_read(kvm_arm_hardware_enabled);
>
> IMO, this_cpu_has_cap() is at fault.

Who ever said otherwise?

> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 7d7128c65161..b862477de2ce 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -3193,7 +3193,9 @@ static void __init setup_boot_cpu_capabilities(void)
>
>  bool this_cpu_has_cap(unsigned int n)
>  {
> -        if (!WARN_ON(preemptible()) && n < ARM64_NCAPS) {
> +        __this_cpu_preempt_check("has_cap");
> +
> +        if (n < ARM64_NCAPS) {

This is likely sufficient, but to Marc's point we have !preemptible() checks
littered about; it just so happens that this_cpu_has_cap() is the first one
to get called. We need to make sure there aren't any other checks that'd
break under hotplug.

While I'd normally like to see the 'right' fix fully fleshed out for
something like this, the bug is ugly enough that I'd rather take a hack for
the time being.
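
Just to spell out the distinction Sean's version relies on, here's a rough
sketch (mine, not from either patch). It assumes an SMP kernel where
migrate_disable() no longer implies preempt_disable(), and where
check_preemption_disabled() treats a migration-disabled task as CPU-stable;
the caller below is made up purely for illustration.

#include <linux/bug.h>          /* WARN_ON() */
#include <linux/percpu-defs.h>  /* __this_cpu_preempt_check() */
#include <linux/preempt.h>      /* migrate_disable(), preemptible() */

/*
 * Hypothetical caller, loosely modelling the hotplug/resume paths Marc
 * describes: the task cannot migrate, but preemption is still enabled.
 */
static void example_hardware_enable_caller(void)
{
        migrate_disable();              /* CPU is stable from here on */

        /* Fires on a preemptible kernel: preempt_count() is zero. */
        WARN_ON(preemptible());

        /*
         * Stays quiet: check_preemption_disabled() also accepts
         * migration-disabled and CPU-bound tasks.
         */
        __this_cpu_preempt_check("example");

        migrate_enable();
}

-- 
Thanks,
Oliver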