Sean Christopherson <sean.j.christopherson@xxxxxxxxx> writes:

> Gracefully handle faults on VMXON, e.g. #GP due to VMX being disabled by
> BIOS, instead of letting the fault crash the system. Now that KVM uses
> cpufeatures to query support instead of reading MSR_IA32_FEAT_CTL
> directly, it's possible for a bug in a different subsystem to cause KVM
> to incorrectly attempt VMXON[*]. Crashing the system is especially
> annoying if the system is configured such that hardware_enable() will
> be triggered during boot.
>
> Opportunistically rename @addr to @vmxon_pointer and use a named param
> to reference it in the inline assembly.
>
> Print 0xdeadbeef in the ultra-"rare" case that reading MSR_IA32_FEAT_CTL
> also faults.
>
> [*] https://lkml.kernel.org/r/20200226231615.13664-1-sean.j.christopherson@xxxxxxxxx
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/vmx.c | 24 +++++++++++++++++++++---
>  1 file changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 07634caa560d..3aba51d782e2 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2218,18 +2218,33 @@ static __init int vmx_disabled_by_bios(void)
>  	       !boot_cpu_has(X86_FEATURE_VMX);
>  }
>
> -static void kvm_cpu_vmxon(u64 addr)
> +static int kvm_cpu_vmxon(u64 vmxon_pointer)
>  {
> +	u64 msr;
> +
>  	cr4_set_bits(X86_CR4_VMXE);
>  	intel_pt_handle_vmx(1);
>
> -	asm volatile ("vmxon %0" : : "m"(addr));
> +	asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
> +			  _ASM_EXTABLE(1b, %l[fault])
> +			  : : [vmxon_pointer] "m"(vmxon_pointer)
> +			  : : fault);
> +	return 0;
> +
> +fault:
> +	WARN_ONCE(1, "VMXON faulted, MSR_IA32_FEAT_CTL (0x3a) = 0x%llx\n",
> +		  rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr) ? 0xdeadbeef : msr);

We seem to be acting under the assumption that the fault is (likely)
caused by VMX being disabled, but AFAICS the fault can also be caused
by passing a bogus pointer (that would be a KVM bug, of course).

> +	intel_pt_handle_vmx(0);
> +	cr4_clear_bits(X86_CR4_VMXE);
> +
> +	return -EFAULT;
>  }
>
>  static int hardware_enable(void)
>  {
>  	int cpu = raw_smp_processor_id();
>  	u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
> +	int r;
>
>  	if (cr4_read_shadow() & X86_CR4_VMXE)
>  		return -EBUSY;
> @@ -2246,7 +2261,10 @@ static int hardware_enable(void)
>  	INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
>  	spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
>
> -	kvm_cpu_vmxon(phys_addr);
> +	r = kvm_cpu_vmxon(phys_addr);
> +	if (r)
> +		return r;
> +
>  	if (enable_ept)
>  		ept_sync_global();

Reviewed-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>

-- 
Vitaly
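
To make the distinction raised above concrete, here is a minimal sketch
of how the FEAT_CTL value printed by the new WARN_ONCE could be decoded,
i.e. whether the BIOS configuration by itself explains a #GP on VMXON.
The FEAT_CTL_* bits are the existing definitions from
arch/x86/include/asm/msr-index.h; the helper itself is hypothetical and
not part of the patch:

#include <linux/types.h>
#include <asm/msr-index.h>

/*
 * Hypothetical helper: per the SDM, VMXON raises #GP when
 * MSR_IA32_FEAT_CTL is not locked or when the relevant VMX enable bit
 * is clear.  KVM executes VMXON outside of SMX operation, so the bit
 * that matters is FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX.  If this returns
 * false for the value the WARN printed, the BIOS configuration does not
 * explain the fault and another cause should be suspected, e.g. a bogus
 * vmxon pointer (which would be a KVM bug).
 */
static bool feat_ctl_explains_vmxon_fault(u64 feat_ctl)
{
	return !(feat_ctl & FEAT_CTL_LOCKED) ||
	       !(feat_ctl & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX);
}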