We use vfio-pci to expose a NIC to a guest, and a packet generator that sends timestamped packets to measure latency. The NIC is programmed to raise an interrupt when it receives a packet. The QEMU vCPU thread is pinned to a fixed CPU core, and the NIC's interrupts are bound to the same core. The host CPU supports APIC virtualization.

We have observed that an interrupt is sometimes delayed for a relatively long time (milliseconds) before being delivered to the guest. There appears to be a small window in vcpu_enter_guest() in arch/x86/kvm/x86.c where an interrupt from a device managed by vfio-pci is queued in the PIR after the PIR has been synced to the VIRR. The interrupt is then not delivered to the guest until the next VM exit/entry cycle. In the code snippet below, if a device interrupt arrives after the KVM_REQ_EVENT check block and before local_irq_disable(), the interrupt request sits in the PIR but not in the VIRR. In the worst case, the interrupt could even be lost if another interrupt from the device arrived before a VM exit occurred.

	/*
	 * KVM_REQ_EVENT is not set when posted interrupts are set by
	 * VT-d hardware, so we have to update RVI unconditionally.
	 */
	if (kvm_lapic_enabled(vcpu)) {
		/*
		 * Update architecture specific hints for APIC
		 * virtual interrupt delivery.
		 */
		if (kvm_x86_ops->hwapic_irr_update)
			kvm_x86_ops->hwapic_irr_update(vcpu,
				kvm_lapic_find_highest_irr(vcpu));
	}

	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
		...
	}

	...

	local_irq_disable();

	if (vcpu->mode == EXITING_GUEST_MODE || vcpu->requests
	    || need_resched() || signal_pending(current)) {

Moving the "if (kvm_lapic_enabled(vcpu))" block to after the "if (vcpu->mode == EXITING_GUEST_MODE ..." block resolved the long interrupt latency issue in my limited testing, but I'm not sure whether that would break something else.