As we now inject the timer interrupt when we're about to enter the guest,
it makes a lot more sense to make sure this happens before the vgic code
queues the pending interrupts. Otherwise, we get the interrupt on the
following exit, which is not great for latency (and leads to all kinds of
bizarre issues when used with active interrupts at the HW level).

Signed-off-by: Marc Zyngier <marc.zyngier@xxxxxxx>
Reviewed-by: Alex Bennée <alex.bennee@xxxxxxxxxx>
Reviewed-by: Christoffer Dall <christoffer.dall@xxxxxxxxxx>
---
 arch/arm/kvm/arm.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9ce5cf0..1141d21 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -523,8 +523,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		if (vcpu->arch.pause)
 			vcpu_pause(vcpu);
 
-		kvm_vgic_flush_hwstate(vcpu);
 		kvm_timer_flush_hwstate(vcpu);
+		kvm_vgic_flush_hwstate(vcpu);
 
 		preempt_disable();
 		local_irq_disable();
@@ -540,8 +540,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
 			local_irq_enable();
 			preempt_enable();
-			kvm_timer_sync_hwstate(vcpu);
 			kvm_vgic_sync_hwstate(vcpu);
+			kvm_timer_sync_hwstate(vcpu);
 			continue;
 		}
 
@@ -587,9 +587,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
 		preempt_enable();
-
-		kvm_timer_sync_hwstate(vcpu);
 		kvm_vgic_sync_hwstate(vcpu);
+		kvm_timer_sync_hwstate(vcpu);
 
 		ret = handle_exit(vcpu, run, ret);
 	}
-- 
2.1.4
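
For reference, the net effect of the three hunks on the run loop is
sketched below: a condensed excerpt of kvm_arch_vcpu_ioctl_run with this
patch applied, keeping only the lines visible in the hunks above, so it
is not buildable on its own.

	/*
	 * Entry: flush the timer first, so the interrupt it may inject
	 * is already pending when the vgic queues interrupts for this
	 * guest entry rather than the next one.
	 */
	kvm_timer_flush_hwstate(vcpu);
	kvm_vgic_flush_hwstate(vcpu);

	preempt_disable();
	local_irq_disable();

	/* ... enter and run the guest, then exit ... */

	/*
	 * Exit (and the early-abort path above): sync back in the
	 * reverse order.
	 */
	kvm_vgic_sync_hwstate(vcpu);
	kvm_timer_sync_hwstate(vcpu);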