On 07/03/2014 19:19, Jan Kiszka wrote:
> On 2014-03-07 18:28, Jan Kiszka wrote:
>> On 2014-03-07 17:46, Paolo Bonzini wrote:
>>> On 07/03/2014 17:29, Jan Kiszka wrote:
>>>> On 2014-03-07 16:44, Paolo Bonzini wrote:
>>>>> With this patch do we still need
>>>>>
>>>>> 	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
>>>>> 		/*
>>>>> 		 * We get here if vmx_interrupt_allowed() said we can't
>>>>> 		 * inject to L1 now because L2 must run. The caller will have
>>>>> 		 * to make L2 exit right after entry, so we can inject to L1
>>>>> 		 * more promptly.
>>>>> 		 */
>>>>> 		return -EBUSY;
>>>>>
>>>>> in enable_irq_window? If not, enable_nmi_window and enable_irq_window
>>>>> can both return void.
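As a sketch only, reconstructed from the diff further down rather than
taken from any tree: with that early return gone, the VMX
enable_irq_window would shrink to a plain void helper, e.g.

	/* Hypothetical void variant; the body is the existing tail of
	 * enable_irq_window, unchanged. */
	static void enable_irq_window(struct kvm_vcpu *vcpu)
	{
		u32 cpu_based_vm_exec_control;

		cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
		cpu_based_vm_exec_control |= CPU_BASED_VIRTUAL_INTR_PENDING;
		vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
	}

with enable_nmi_window treated the same way.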
>>>> I don't see right now why this should have changed. We still cannot
>>>> interrupt vmlaunch/vmresume.
>>> But then shouldn't the same be true for enable_nmi_window? It doesn't
>>> check is_guest_mode(vcpu) && nested_exit_on_nmi(vcpu).
>> Yes, that seems wrong now. But I need to think through again why we
>> may have excluded NMIs from this test so far.
>>> Since check_nested_events has already returned -EBUSY, perhaps the
>>> following:
>>>
>>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>>> index fda1028..df320e9 100644
>>> --- a/arch/x86/kvm/vmx.c
>>> +++ b/arch/x86/kvm/vmx.c
>>> @@ -4522,15 +4522,6 @@ static int enable_irq_window(struct kvm_vcpu *vcpu)
>>>  {
>>>  	u32 cpu_based_vm_exec_control;
>>>
>>> -	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
>>> -		/*
>>> -		 * We get here if vmx_interrupt_allowed() said we can't
>>> -		 * inject to L1 now because L2 must run. The caller will have
>>> -		 * to make L2 exit right after entry, so we can inject to L1
>>> -		 * more promptly.
>>> -		 */
>>> -		return -EBUSY;
>>> -
>>>  	cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
>>>  	cpu_based_vm_exec_control |= CPU_BASED_VIRTUAL_INTR_PENDING;
>>>  	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index a03d611..83c2df5 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -5970,13 +5970,13 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>>>  	inject_pending_event(vcpu);
>>>
>>> -	if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events)
>>> -		req_immediate_exit |=
>>> -			kvm_x86_ops->check_nested_events(vcpu,
>>> -							 req_int_win) != 0;
>>> +	if (is_guest_mode(vcpu) &&
>>> +	    kvm_x86_ops->check_nested_events &&
>>> +	    kvm_x86_ops->check_nested_events(vcpu, req_int_win) != 0)
>>> +		req_immediate_exit = true;
>>>
>>>  	/* enable NMI/IRQ window open exits if needed */
>>> -	if (vcpu->arch.nmi_pending)
>>> +	else if (vcpu->arch.nmi_pending)
>>>  		req_immediate_exit |=
>>>  			kvm_x86_ops->enable_nmi_window(vcpu) != 0;
>>>  	else if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
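Spelled out (a reconstruction from the diff above, with the body of the
last branch assumed to mirror the NMI one): once check_nested_events asks
for an immediate exit, the window-enabling logic is skipped entirely,
because the three tests now chain:

	if (is_guest_mode(vcpu) &&
	    kvm_x86_ops->check_nested_events &&
	    kvm_x86_ops->check_nested_events(vcpu, req_int_win) != 0)
		req_immediate_exit = true;
	/* enable NMI/IRQ window open exits if needed */
	else if (vcpu->arch.nmi_pending)
		req_immediate_exit |=
			kvm_x86_ops->enable_nmi_window(vcpu) != 0;
	else if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
		req_immediate_exit |=
			kvm_x86_ops->enable_irq_window(vcpu) != 0;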
>> Hmm, looks reasonable.
> Also, on second thought: I can give this hunk some test cycles here,
> just in case.
Thanks.
> Reading through my code again, I'm now wondering why I added
> check_nested_events to both inject_pending_event and vcpu_enter_guest.
> The former seems redundant, since only vcpu_enter_guest calls
> inject_pending_event. I guess I forgot a cleanup here.
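The cleanup would then presumably just be dropping the call from
inject_pending_event, along these lines (a hypothetical hunk modeled on
the pattern in the diff above; the actual lines in inject_pending_event
may differ):

	-	if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events)
	-		kvm_x86_ops->check_nested_events(vcpu, req_int_win);

leaving the call in vcpu_enter_guest as the only one.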
> I can fold in your changes when I resend for the other cleanup.
As you prefer, I can also post it as a separate patch (my changes above
do not have the int->void change).
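For reference, the int->void change would presumably boil down to
(assumed shape, based on the callback signatures in that tree):

	-	int (*enable_nmi_window)(struct kvm_vcpu *vcpu);
	-	int (*enable_irq_window)(struct kvm_vcpu *vcpu);
	+	void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
	+	void (*enable_irq_window)(struct kvm_vcpu *vcpu);

in struct kvm_x86_ops, plus the matching vmx/svm implementations and the
two call sites in vcpu_enter_guest.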
I had pushed your patches already to kvm/queue. You can post v4
relative to kvm/next.
Paolo