Avi Kivity wrote:
Cam Macdonell wrote:
If my understanding is correct, both of the VMs that want to communicate
would give this path on the command line, with one of them specified
as "server".
Exactly; the one with "server" in its parameter list will wait for
a connection before booting.
hm, we may be able to eliminate the server from the fast path, at the
cost of some complexity.
When a guest connects to the server, the server creates an eventfd and
passes it, using SCM_RIGHTS, to all other connected guests. The server
also passes the eventfds of the currently connected guests to the new guest.
From then on, the server does not participate in anything; when a guest
wants to send an interrupt to one or more other guests, its qemu just
writes to the eventfds of the corresponding guests; their qemus will
inject the interrupt, without any server involvement.
Now, anyone who has been paying attention will have their alarms going
off at the word eventfd. And yes, if the host supports irqfd, the
various qemus can associate those eventfds with an irq and pretty much
forget about them. When a qemu triggers an irqfd, the interrupt will be
injected directly without the target qemu's involvement.
I like it.
That certainly sounds like the right direction for a multi-VM setup. I'm
currently working on the shmem PCI card server discussed in the first
patch's thread to support broadcast and multicast, which will now be
simpler if qemu handles the *casting.
My usual noob questions: do I need to run Greg's tree on the host for
the necessary irqfd/eventfd support? Are there any examples to work
from aside from Greg's unit tests?
Thanks,
Cam