Hi Michael,

I have looked into how irqfd with MSI-X mask notifiers works. From what
I can tell, the guest notifiers are enabled by vhost-net in order to
hook up irqfds for the virtqueues. MSI-X allows vectors to be masked, so
there is an MMIO write notifier in qemu-kvm that toggles the irqfd and
its QEMU fd handler when the guest toggles the MSI-X mask. While masked,
the irqfd is disabled but stays open as an eventfd, so masking/unmasking
a vector never closes or reopens the eventfd file descriptor itself.

I'm having trouble finding a direct parallel to virtio-ioeventfd here.
We always want one ioeventfd per virtqueue, unless the host kernel does
not support more than 6 ioeventfds per VM. When vhost sets the host
notifier we want to remove the QEMU fd handler and let vhost use the
event notifier's fd as it wishes. When vhost clears the host notifier we
want to add the QEMU fd handler again (again, unless the kernel does not
support more than 6 ioeventfds per VM).

I think hooking in at the virtio-pci.c level instead of virtio.c is
possible, but we would still have the same state transitions. I hope it
can be done without adding per-virtqueue variables that track state.

Before I go down this route: is there something I've missed, and do you
think this approach would be better?

Stefan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html