On 17/10/2016 08:47, Wang, Wei W wrote:
> Please let me elaborate on the two possible solutions based on the
> existing eventfd mechanism and the new hypercall mechanism - how we
> can use them to deliver notifications from the virtio1 driver to the
> virtio2 driver (across world contexts). We can't directly deliver
> interrupts from the virtio1 driver to the virtio2 driver, so here,
> for both solutions, we need a trampoline - the host. A uuid field
> would need to be added to the kvm struct, so that the trampoline can
> know who is who.

This is already problematic. KVM tries really, really hard to avoid any
global state across VMs. If you define a global UUID, you'll also have
to design how to make it safe against multiple users of KVM, and how it
interacts with features like user namespaces. And you'll also have to
explain it to me, since I'm not at all a security expert. That may be
harder than the design. :)

> Generally, two steps are needed:
> Step1: virtio1's driver sends the interrupt request to the trampoline;
> Step2: the trampoline sends the interrupt request to virtio2's driver.
>
> *Solution 1. eventfd
> Step1: achieved by virtio1's ioeventfd;
> Step2: achieved by virtio2's irqfd.
>
> In the setup phase, the trampoline makes a connection between
> virtio1's ioeventfd and virtio2's irqfd. So, in this solution, we
> would need a host kernel module to do the trampoline work -
> connection setup and interrupt request delivery.

No, you don't! The point is that you can pass the same file descriptor
to KVM_IOEVENTFD and KVM_IRQFD. The virtio-net VM can pass the irqfd to
the vhost-net VM via the vhost socket. This is exactly how things work
for vhost-user. vhost-pci can additionally use the received file
descriptor as the ioeventfd. (Sketches of both steps follow at the end
of this message.)

>> No, the hypercall will not be accepted in any form. The established
>> protocols for communication between KVM and the outside world,
>> including other KVM instances, are MMIO write and irqfd.
>
> Could you please give more details about why a hypercall is not
> welcome, given the fact that it has already been implemented in KVM
> for some usages? Thanks.

Well, hypercalls aren't really that common in KVM. :) There are exactly
two, and one of them does nothing except force a vmexit.

Anyway, here are four good reasons why this hypercall is not welcome:

1) irqfd seems to be fast enough for VFIO and existing vhost backends,
so it should be fast enough for vhost-pci as well;

2) if irqfd is not fast enough, optimizing it would benefit VFIO and
existing vhost backends, so we should look into that first anyway;

3) vhost-pci's host part should be basically a vhost-user backend
implemented by QEMU. Any deviation from that should be considered very
carefully;

4) vhost-pci's first use case should be with DPDK, which does polling
anyway, not interrupts.

Paolo
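
To make the "same file descriptor, two registrations" point concrete,
here is a minimal sketch using only the documented KVM_IOEVENTFD and
KVM_IRQFD ioctls. The vm1_fd/vm2_fd parameters, the doorbell address,
and the GSI are hypothetical placeholders, not values from this thread;
in reality the two VMs live in separate QEMU processes, so the eventfd
first has to cross the vhost-user socket (see the second sketch) rather
than both VM fds being visible to one function as shown here.

#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical values -- not from the thread. */
#define DOORBELL_ADDR 0xfe000000ULL   /* MMIO doorbell in VM1's space */
#define DOORBELL_GSI  5               /* interrupt line to raise in VM2 */

int wire_doorbell_to_irq(int vm1_fd, int vm2_fd)
{
    int efd = eventfd(0, EFD_CLOEXEC);
    if (efd < 0)
        return -1;

    /* In VM1, a 4-byte guest write to the doorbell signals efd in the
     * kernel, with no exit to userspace. */
    struct kvm_ioeventfd ioev = {
        .addr = DOORBELL_ADDR,
        .len  = 4,
        .fd   = efd,
    };

    /* In VM2, a signal on efd injects an interrupt on DOORBELL_GSI. */
    struct kvm_irqfd irq = {
        .fd  = efd,
        .gsi = DOORBELL_GSI,
    };

    if (ioctl(vm1_fd, KVM_IOEVENTFD, &ioev) < 0 ||
        ioctl(vm2_fd, KVM_IRQFD, &irq) < 0) {
        close(efd);
        return -1;
    }
    return efd;
}

No trampoline module is needed: once both registrations exist, the
doorbell write in one guest raises the interrupt in the other entirely
inside the kernel.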
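
And a sketch of how the descriptor crosses the process boundary:
vhost-user attaches file descriptors (for example the vring call/kick
eventfds) to its messages as SCM_RIGHTS ancillary data on the UNIX
socket. The helper below shows only the generic fd-passing mechanism,
not the actual vhost-user message framing.

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one fd over a connected UNIX socket as SCM_RIGHTS data. */
int send_fd(int sock, int fd)
{
    char dummy = 0;
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;        /* force cmsghdr alignment */
    } ctrl;
    memset(&ctrl, 0, sizeof(ctrl));

    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = ctrl.buf,
        .msg_controllen = sizeof(ctrl.buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;   /* kernel dups fd into the receiver */
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

The receiver gets its own copy of the descriptor and can hand it to
KVM_IOEVENTFD (or KVM_IRQFD) exactly as in the first sketch.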