virtio_mmio attempts to mimic the layout of some control registers from
virtio_pci. These registers, in particular VIRTIO_MMIO_QUEUE_SEL and
VIRTIO_PCI_QUEUE_SEL, are active in nature, not just passive like a normal
memory location. Thus, the host side must react immediately upon a write to
these registers and remap some other registers (queue address, size, etc.)
to queue-specific locations. This is just not possible for mmio, and, I
would argue, not desirable for PCI either.
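
To make the "active register" point concrete, here is a rough sketch of
what the host side has to do when the guest writes QUEUE_SEL
(hypothetical code, not from any real VMM; the struct names are made up,
and the offsets are the legacy virtio-mmio ones as far as I recall):

#include <stdint.h>

#define VIRTIO_MMIO_QUEUE_SEL      0x030
#define VIRTIO_MMIO_QUEUE_NUM_MAX  0x034
#define VIRTIO_MMIO_QUEUE_PFN      0x040

struct vq_state {                  /* hypothetical per-queue state */
	uint32_t num_max;
	uint32_t pfn;
};

struct mmio_dev {                  /* hypothetical device model */
	uint32_t queue_sel;
	struct vq_state vq[8];
};

static void mmio_write(struct mmio_dev *d, uint32_t offset, uint32_t val)
{
	switch (offset) {
	case VIRTIO_MMIO_QUEUE_SEL:
		/* Active behaviour: this one write must redirect every
		 * later access to QUEUE_NUM_MAX/QUEUE_PFN to a different
		 * queue.  A passive memory location cannot do that. */
		if (val < 8)
			d->queue_sel = val;
		break;
	case VIRTIO_MMIO_QUEUE_PFN:
		d->vq[d->queue_sel].pfn = val;
		break;
	}
}

static uint32_t mmio_read(struct mmio_dev *d, uint32_t offset)
{
	switch (offset) {
	case VIRTIO_MMIO_QUEUE_NUM_MAX:
		return d->vq[d->queue_sel].num_max;
	case VIRTIO_MMIO_QUEUE_PFN:
		return d->vq[d->queue_sel].pfn;
	default:
		return 0;
	}
}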
Because the queue selector register doesn't work in mmio, it is clear
that only single-queue virtio devices can work. This means no
virtio_net - I've seen a few messages complaining that it doesn't work,
but nothing so far on why.
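
The guest-side per-queue programming sequence that a multi-queue device
depends on looks roughly like this (hypothetical helpers, reusing the
VIRTIO_MMIO_* offsets from the snippet above); if the "registers" are
just passive memory, every pass through the loop effectively touches
queue 0:

static inline void mmio_w32(void *base, uint32_t off, uint32_t v)
{
	*(volatile uint32_t *)((char *)base + off) = v;
}

static inline uint32_t mmio_r32(void *base, uint32_t off)
{
	return *(volatile uint32_t *)((char *)base + off);
}

static void setup_queues(void *base, int nqueues, const uint32_t pfns[])
{
	for (int i = 0; i < nqueues; i++) {
		/* Select queue i; with a passive backing page this
		 * write changes nothing on the host side. */
		mmio_w32(base, VIRTIO_MMIO_QUEUE_SEL, i);

		uint32_t max = mmio_r32(base, VIRTIO_MMIO_QUEUE_NUM_MAX);
		if (max == 0)
			break;		/* queue not available */

		mmio_w32(base, VIRTIO_MMIO_QUEUE_PFN, pfns[i]);
	}
}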
It seems from some messages back in March that there is a register
re-layout in the works for virtio_pci. I think that virtio_pci could
become just one of the various ways to configure a virtio_mmio device,
and there would be no need for any "registers", just memory locations
acting like memory. The one gotcha is figuring out the kick/notify
mechanism for the guest to notify the host when there is work on a
queue. For notify, using a hypervisor call could unify the pci and mmio
cases, but it comes with the cost of leaving the pure pci domain.
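
As a strawman, the guest-side kick could go through one indirection that
every transport fills in, with a hypercall variant for the unified case
(all of the names and the hypercall number below are made up):

#include <stdint.h>

#define HC_VIRTIO_NOTIFY 42	/* hypothetical hypercall number */

/* Hypothetical wrapper around the platform's hypercall instruction
 * (vmcall/hvc/...). */
extern long hypercall2(unsigned long nr, unsigned long a0, unsigned long a1);

struct virtio_transport_ops {
	/* single kick entry point, chosen per transport at probe time */
	void (*notify)(uint64_t dev_id, uint32_t queue);
};

/* pci and mmio could both route the kick through the same hypercall,
 * so the device model sees one notification path. */
static void notify_via_hypercall(uint64_t dev_id, uint32_t queue)
{
	hypercall2(HC_VIRTIO_NOTIFY, dev_id, queue);
}

static const struct virtio_transport_ops hypercall_notify_ops = {
	.notify = notify_via_hypercall,
};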
I got into this code because I am looking at the possibility of using an
off-the-shelf embedded processor sitting on a PCIe port to emulate the
virtio pci interface. The notion of active registers makes this a
non-starter, whereas if there were a purely memory-based system like
mmio (with mq fixes), a real PCI device could easily emulate it.
Excepting, of course, whatever the notify mechanism is. If it were
hypercall-based, then the hypervisor could call a transport- or
device-specific way of notifying, and a small notify driver could poke
the PCI device in some way.
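
On the host side, the same hypothetical hypercall would just dispatch to
whatever poke the backend registered; for the PCIe-device case that
could be a doorbell write into one of its BARs (again, every name and
offset here is made up for illustration):

#include <stdint.h>

struct notify_backend {
	void (*poke)(struct notify_backend *b, uint32_t queue);
	volatile uint32_t *doorbell;	/* mapped from the device's BAR */
};

static void pci_doorbell_poke(struct notify_backend *b, uint32_t queue)
{
	/* tell the off-the-shelf PCIe device which queue has work */
	b->doorbell[0] = queue;
}

static void backend_init(struct notify_backend *b, volatile uint32_t *db)
{
	b->doorbell = db;
	b->poke = pci_doorbell_poke;
}

/* called from the hypervisor's HC_VIRTIO_NOTIFY handler */
static void handle_virtio_notify(struct notify_backend *b, uint32_t queue)
{
	b->poke(b, queue);
}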