Gregory Haskins wrote:
>> Oh yes. But don't call it dynhc - like Chris says, it's the wrong
>> semantic.
>> Since we want to connect it to an eventfd, call it HC_NOTIFY or
>> HC_EVENT or something along these lines. You won't be able to pass
>> any data, but that's fine. Registers are saved to memory anyway.
> Ok, but how would you access the registers, since you would presumably
> only be getting a waitq::func callback on the eventfd? Or were you
> saying that more data, if required, is saved in a side-band memory
> location? I can see the latter working.
Yeah. You basically have that side-band in vbus shmem (or the virtio ring).
> I can't wrap my head around the former.
I only meant that registers aren't faster than memory, since they are
just another memory location. In fact, registers are accessed through a
function call (not that that takes any time these days).
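To make that concrete, here's a rough sketch of what an HC_EVENT handler
on the kvm side could look like. Only kvm_register_read() and
eventfd_signal() are real kernel interfaces; handle_hc_event(),
struct hc_notifier, and hc_notifier_find() are made-up names for
illustration, not existing kvm code:

#include <linux/kvm_host.h>
#include <linux/eventfd.h>

/* Hypothetical per-notifier state: an eventfd bound to a guest "port". */
struct hc_notifier {
    struct eventfd_ctx *eventfd;
};

/* hypothetical lookup of the notifier registered for this port */
static struct hc_notifier *hc_notifier_find(struct kvm *kvm,
                                            unsigned long port);

static int handle_hc_event(struct kvm_vcpu *vcpu)
{
    /* registers are reachable anyway, via an accessor function... */
    unsigned long port = kvm_register_read(vcpu, VCPU_REGS_RBX);

    /* ...but the hypercall itself carries no payload; any per-event
     * data lives in the side-band shared memory (vbus shmem or the
     * virtio ring), which the consumer reads after it wakes up */
    struct hc_notifier *n = hc_notifier_find(vcpu->kvm, port);
    if (!n)
        return -EINVAL;

    eventfd_signal(n->eventfd, 1);  /* wakes the waitq::func consumer */
    return 0;
}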
Just to make sure we have everything plumbed down, here's how I see
things working out (using qemu and virtio; use sed to taste). A rough
sketch of the fd wiring follows the list:
1. qemu starts up, sets up the VM
2. qemu creates virtio-net-server
3. qemu allocates six eventfds: irq, stopirq, notify (one set for tx
ring, one set for rx ring)
4. qemu connects the six eventfds to the data-available,
data-not-available, and kick ports of virtio-net-server
5. the guest starts up and configures virtio-net in pci pin mode
6. qemu notices and decides it will manage interrupts in user space,
since this is complicated (shared, level-triggered interrupts)
7. the guest OS boots, loads device driver
8. device driver switches virtio-net to msix mode
9. qemu notices, plumbs the irq fds as msix interrupts, plumbs the
notify fds as notifyfd
10. look ma, no hands.
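Roughly, the fd wiring for one ring (steps 3 and 9) could look like the
sketch below. KVM_IRQFD and KVM_IOEVENTFD are the uapi names from
<linux/kvm.h>; wire_ring() and the rest of the glue are invented for
the sketch, and the stopirq pair is left out since it stays entirely on
the virtio-net-server side:

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Wire one ring: one fd kvm signals as a guest MSI-X vector (step 9),
 * and one fd the guest's kick reaches without bouncing through qemu. */
static int wire_ring(int vm_fd, __u32 gsi, __u64 notify_addr)
{
    int irqfd    = eventfd(0, 0);   /* kvm signals this -> guest irq */
    int notifyfd = eventfd(0, 0);   /* guest kick lands here */

    struct kvm_irqfd irq = { .fd = irqfd, .gsi = gsi };
    if (ioctl(vm_fd, KVM_IRQFD, &irq) < 0)
        return -1;

    struct kvm_ioeventfd kick = {
        .addr  = notify_addr,       /* the ring's notify register */
        .len   = 2,
        .fd    = notifyfd,
        .flags = 0,  /* or KVM_IOEVENTFD_FLAG_PIO for a port-io notify */
    };
    if (ioctl(vm_fd, KVM_IOEVENTFD, &kick) < 0)
        return -1;

    /* notifyfd goes to virtio-net-server's kick port; irqfd stays
     * connected to its data-available port */
    return 0;
}

Call it once for the tx ring and once for the rx ring, and four of the
six fds are accounted for.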
Under the hood, the following takes place.
kvm wires the irqfds to schedule a work item which fires the
interrupt. One day the kvm developers get their act together and
change it to inject the interrupt directly when the irqfd is signalled
(which could be from the net softirq or somewhere similarly nasty).
virtio-net-server wires notifyfd according to its liking. It may
schedule a thread, or it may execute directly.
And they all lived happily ever after.
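For concreteness, the deferred-injection path could look roughly like
this; the shape is loosely after what became virt/kvm/eventfd.c, but
the names and signatures here are illustrative rather than a quote of
that code:

#include <linux/kvm_host.h>
#include <linux/wait.h>
#include <linux/workqueue.h>

struct irqfd {
    struct kvm *kvm;
    int gsi;                       /* routing entry for the msix vector */
    wait_queue_entry_t wait;       /* hooked into the eventfd's waitq */
    struct work_struct inject;
};

static void irqfd_inject(struct work_struct *work)
{
    struct irqfd *irqfd = container_of(work, struct irqfd, inject);

    /* raise and lower the line to fire an edge interrupt at the guest */
    kvm_set_irq(irqfd->kvm, KVM_USERSPACE_IRQ_SOURCE_ID, irqfd->gsi, 1);
    kvm_set_irq(irqfd->kvm, KVM_USERSPACE_IRQ_SOURCE_ID, irqfd->gsi, 0);
}

static int irqfd_wakeup(wait_queue_entry_t *wait, unsigned mode,
                        int sync, void *key)
{
    struct irqfd *irqfd = container_of(wait, struct irqfd, wait);

    /* we may be running in the net softirq here, so defer to a work
     * item; the "get their act together" step would drop this hop
     * and inject directly from this callback */
    schedule_work(&irqfd->inject);
    return 0;
}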
> Ack. I hope when it's all said and done I can convince you that the
> framework to code up those virtio backends in the kernel is vbus ;)
If vbus doesn't bring significant performance advantages, I'll prefer
virtio because of existing investment.
> But even if not, this should provide enough plumbing that we can all
> coexist together peacefully.
Yes, vbus and virtio can compete on their merits without bias from some
maintainer getting in the way.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.