From: Kenji Kaneshige <kaneshige.kenji@xxxxxxxxxxxxxx>

Currently, an NMI is blindly sent to all vCPUs when an NMI button event
happens. This does not properly emulate real hardware, on which an NMI
button event triggers LINT1. Because of this, the NMI is delivered to the
processor even when LINT1 is masked in the LVT. For example, this causes
the problem that kdump initiated by NMI sometimes does not work on KVM,
because kdump assumes NMI is masked on CPUs other than CPU0.

With this patch, the KVM_NMI ioctl is handled as follows.

- When the in-kernel irqchip is enabled, the KVM_NMI ioctl is handled as
  a request to trigger LINT1 on the processor. LINT1 is emulated by the
  in-kernel irqchip.

- When the in-kernel irqchip is disabled, the KVM_NMI ioctl is handled as
  a request to inject an NMI into the processor. This assumes LINT1 is
  already emulated in userland.

(laijs) Add KVM_NMI API document

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@xxxxxxxxxxxxxx>
Tested-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxx>
---
 Documentation/virtual/kvm/api.txt |   18 ++++++++++++++++++
 arch/x86/kvm/irq.h                |    1 +
 arch/x86/kvm/lapic.c              |    7 +++++++
 arch/x86/kvm/x86.c                |    5 ++++-
 4 files changed, 30 insertions(+), 1 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index b0e4b9c..3162fc8 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1430,6 +1430,24 @@ is supported; 2 if the processor requires all virtual machines to have
 an RMA, or 1 if the processor can use an RMA but doesn't require it,
 because it supports the Virtual RMA (VRMA) facility.
 
+4.64 KVM_NMI
+
+Capability: KVM_CAP_USER_NMI
+Architectures: x86
+Type: vcpu ioctl
+Parameters: none
+Returns: 0 on success, -1 on error
+
+This ioctl injects NMI to the vcpu:
+
+ - When in-kernel irqchip is enabled, KVM_NMI ioctl is handled as a
+   request of triggering LINT1 on the processor. LINT1 is emulated in
+   in-kernel lapic irqchip.
+
+ - When in-kernel irqchip is disabled, KVM_NMI ioctl is handled as a
+   request of injecting NMI to the processor. This assumes LINT1 is
+   already emulated in userland lapic.
+
 5. The kvm_run structure
 
 Application code obtains a pointer to the kvm_run structure by
diff --git a/arch/x86/kvm/irq.h b/arch/x86/kvm/irq.h
index 53e2d08..0c96315 100644
--- a/arch/x86/kvm/irq.h
+++ b/arch/x86/kvm/irq.h
@@ -95,6 +95,7 @@ void kvm_pic_reset(struct kvm_kpic_state *s);
 void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu);
 void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu);
 void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu);
+void kvm_apic_lint1_deliver(struct kvm_vcpu *vcpu);
 void __kvm_migrate_apic_timer(struct kvm_vcpu *vcpu);
 void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu);
 void __kvm_migrate_timers(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 57dcbd4..87fe36a 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1039,6 +1039,13 @@ void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu)
 	kvm_apic_local_deliver(apic, APIC_LVT0);
 }
 
+void kvm_apic_lint1_deliver(struct kvm_vcpu *vcpu)
+{
+	struct kvm_lapic *apic = vcpu->arch.apic;
+
+	kvm_apic_local_deliver(apic, APIC_LVT1);
+}
+
 static struct kvm_timer_ops lapic_timer_ops = {
 	.is_periodic = lapic_is_periodic,
 };
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 84a28ea..615e6a7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2731,7 +2731,10 @@ static int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
 
 static int kvm_vcpu_ioctl_nmi(struct kvm_vcpu *vcpu)
 {
-	kvm_inject_nmi(vcpu);
+	if (irqchip_in_kernel(vcpu->kvm))
+		kvm_apic_lint1_deliver(vcpu);
+	else
+		kvm_inject_nmi(vcpu);
 
 	return 0;
 }
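
For reference, a minimal userspace sketch (not part of the patch) of how a
VMM might use the ioctl documented above. The helper name and error handling
are illustrative only; it assumes kvm_fd is the open /dev/kvm file descriptor
and vcpu_fd was obtained earlier with KVM_CREATE_VCPU.

	/*
	 * Illustrative sketch: deliver an NMI-button event to one vCPU
	 * via KVM_NMI. The helper name is hypothetical.
	 */
	#include <linux/kvm.h>
	#include <sys/ioctl.h>
	#include <stdio.h>

	static int send_nmi_button_event(int kvm_fd, int vcpu_fd)
	{
		/* KVM_CAP_USER_NMI advertises the KVM_NMI vcpu ioctl. */
		if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_USER_NMI) <= 0) {
			fprintf(stderr, "KVM_NMI not supported\n");
			return -1;
		}

		/* No parameters; returns 0 on success, -1 on error. */
		return ioctl(vcpu_fd, KVM_NMI);
	}

With the patch applied, whether this results in a direct NMI injection or a
LINT1 delivery through the local APIC depends on whether the in-kernel
irqchip is enabled for the VM.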