On 2021/4/27 07:09, Lai Jiangshan wrote:
From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>

In VMX, the NMI handler needs to be invoked after an NMI VM-Exit. Before
commit 1a5488ef0dcf6 ("KVM: VMX: Invoke NMI handler via indirect call
instead of INTn"), this was done with INTn ("int $2"). But the INTn
microcode is relatively expensive, so that commit reworked NMI VM-Exit
handling to invoke the kernel handler via a function call. INTn also
doesn't set the NMI-blocked flag required by the Linux kernel NMI entry,
so moving away from INTn is very reasonable. Yet some details were missed.

With the said commit applied, the NMI entry pointer is fetched from the
IDT table and called on the kernel stack. But the NMI entry installed in
the IDT table is asm_exc_nmi(), which expects to be invoked on the IST
stack by the hardware, and it relies on the RSP-located "NMI executing"
variable on the IST stack to work correctly. When it is unexpectedly
called on the kernel stack, the "NMI executing" variable is also on the
kernel stack, where it is "uninitialized" garbage that can make the NMI
entry run the wrong way.

So we should not use the NMI entry installed in the IDT table. Rather, we
should use the NMI entry that is allowed to run on the kernel stack,
asm_noist_exc_nmi(), which is also used for XENPV and early booting.
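To make the intended change concrete, here is a minimal sketch of the
direction this patch takes, not the actual hunk from patch 3. The
handle_nmi_irqoff() name is a placeholder, and the sketch assumes the
earlier patches in this series make asm_noist_exc_nmi() visible to KVM;
kvm_before_interrupt()/kvm_after_interrupt() and
vmx_do_interrupt_nmi_irqoff() are the existing helpers.

/*
 * Sketch only: for an NMI VM-Exit, dispatch to the NMI entry that is
 * explicitly allowed to run on the kernel stack instead of the
 * IDT-installed asm_exc_nmi().
 */
static void handle_nmi_irqoff(struct kvm_vcpu *vcpu)
{
	kvm_before_interrupt(vcpu);
	vmx_do_interrupt_nmi_irqoff((unsigned long)asm_noist_exc_nmi);
	kvm_after_interrupt(vcpu);
}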
The problem can be demonstrated with the testing patch below.

1) The testing patch applies without conflict before this patch 3, and it
   shows the problem: the NMI is missed in this case.
2) To verify this patch 3, you need to manually copy the same logic of the
   testing patch on top of it; it then shows that the problem is fixed.
3) The single line added to vmenter.S just emulates the situation where
   "uninitialized" garbage on the kernel stack happens to be 1 and happens
   to sit at the same location as the RSP-located "NMI executing" variable.

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 3a6461694fc2..32096049c2a2 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -316,6 +316,7 @@ SYM_FUNC_START(vmx_do_interrupt_nmi_irqoff)
 #endif
 	pushf
 	push $__KERNEL_CS
+	movq $1, -24(%rsp) // "NMI executing": 1 = nested, non-1 = not-nested
 	CALL_NOSPEC _ASM_ARG1
 
 	/*
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bcbf0d2139e9..9509d2edd50d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6416,8 +6416,12 @@ static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
 	else if (is_machine_check(intr_info))
 		kvm_machine_check();
 	/* We need to handle NMIs before interrupts are enabled */
-	else if (is_nmi(intr_info))
+	else if (is_nmi(intr_info)) {
+		unsigned long count = this_cpu_read(irq_stat.__nmi_count);
 		handle_interrupt_nmi_irqoff(&vmx->vcpu, intr_info);
+		if (count == this_cpu_read(irq_stat.__nmi_count))
+			pr_info("kvm nmi miss\n");
+	}
 }
 
 static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
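As I read it, the injected 1 makes asm_exc_nmi()'s nested-NMI check
believe the NMI is nested, so the entry returns without ever calling the
C handler and the per-CPU NMI count does not advance; the vmx.c hunk
detects exactly that by comparing the count before and after the
dispatch. For clarity, a stand-alone C model of the same detection
pattern (plain globals instead of this_cpu_read()/irq_stat, purely
illustrative):

#include <stdio.h>

/* Stand-in for irq_stat.__nmi_count: bumped by the real NMI handler. */
static unsigned long nmi_count;

/* Stand-in for the NMI entry; if it never runs, the counter stays put. */
static void fake_nmi_handler(void)
{
	nmi_count++;
}

static void dispatch_nmi(void)
{
	unsigned long count = nmi_count;

	fake_nmi_handler();

	/* Same check as the testing patch: unchanged counter means the
	 * handler was skipped. */
	if (count == nmi_count)
		printf("kvm nmi miss\n");
}

int main(void)
{
	dispatch_nmi();
	return 0;
}

In this model the handler always runs, so nothing is printed; in the
broken KVM path the real handler is skipped and the message appears.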