On Wed, May 22, 2019 at 03:02:38PM +0200, Johannes Berg wrote:
> Hi,
>
> While my main interest is mostly in UML right now [1] I've CC'ed the
> qemu and virtualization lists because something similar might actually
> apply to other types of virtualization.
>
> I'm thinking about adding virt-io support to UML, but the tricky part is
> that while I want to use the virt-io basics (because it's a nice
> interface from the 'inside'), I don't actually want the stock drivers
> that are part of the kernel now (like virtio-net etc.) but rather
> something that integrates with wifi (probably building on hwsim).
>
> The 'inside' interfaces aren't really a problem - just have a specific
> device ID for this, and then write a normal virtio kernel driver for it.
>
> The 'outside' interfaces are where my thinking breaks down right now.
>
> Looking at lkl, the outside is just all implemented in lkl as code that
> gets linked to the library, so in UML terms it'd just be extra 'outside'
> code like the timer handling or other netdev stuff we have today.
> Looking at qemu, it's of course also implemented there, and then
> interfaces with the real network, console abstraction, etc.
>
> However, like I said above, I really need something very custom and not
> likely to make it upstream to any project (because what point is that if
> you cannot connect to the rest of the environment I'm building), so I'm
> thinking that perhaps it should be possible to write an abstract
> 'outside' that lets you interact with it really from out-of-process?
> Perhaps through some kind of shared memory segment? I think that gets
> tricky with virt-io doing DMA (I think it does?) though, so that part
> would have to be implemented directly and not out-of-process?
>
> But really that's why I'm asking - is there a better way than to just
> link the device-side virt-io code into the same binary (be it lkl lib,
> uml binary, qemu binary)?

Hi Johannes,

Check out vhost-user. It's a protocol for running a subset of a VIRTIO
device's emulation in a separate process (usually just the data plane,
with the PCI emulation and other configuration/setup still handled by
QEMU).

vhost-user uses a UNIX domain socket to pass file descriptors to shared
memory regions. This way the vhost-user device backend process has
access to guest RAM.

This would be quite different for UML, since my understanding is you
don't have guest RAM but actual host Linux processes, but vhost-user
might still give you ideas:

https://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/interop/vhost-user.rst;hb=HEAD

Stefan
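
P.S. In case a concrete example helps, below is a rough, untested sketch
(plain C, not real vhost-user message parsing) of the underlying
mechanism: the backend receives a file descriptor over a UNIX domain
socket via SCM_RIGHTS and mmap()s it to reach the shared memory region,
which is essentially how a vhost-user backend gains access to guest RAM.
The fixed socket fd and the one-region message layout are made up purely
for illustration; the real protocol sends a VHOST_USER_SET_MEM_TABLE
message describing several regions, but the fd-passing and mapping steps
look much like this.

/* Sketch: receive one file descriptor over a UNIX domain socket with
 * SCM_RIGHTS and mmap() it. This is the mechanism vhost-user builds on
 * to give the backend process access to guest memory regions. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <unistd.h>

static int recv_region_fd(int sock, size_t *size)
{
    struct msghdr msg = { 0 };
    struct iovec iov;
    char ctrl[CMSG_SPACE(sizeof(int))];
    uint64_t region_size; /* illustrative payload: size of the region */

    iov.iov_base = &region_size;
    iov.iov_len = sizeof(region_size);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    if (recvmsg(sock, &msg, 0) != sizeof(region_size))
        return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (!cmsg || cmsg->cmsg_level != SOL_SOCKET ||
        cmsg->cmsg_type != SCM_RIGHTS)
        return -1;

    int fd;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
    *size = region_size;
    return fd;
}

int main(void)
{
    /* Assume fd 3 is a connected UNIX domain socket handed to us by the
     * frontend process; this is an assumption for the sketch only. */
    size_t size;
    int fd = recv_region_fd(3, &size);
    if (fd < 0) {
        fprintf(stderr, "failed to receive region fd\n");
        return 1;
    }

    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* From here the backend can read/write the shared region, the same
     * way a vhost-user backend accesses guest RAM. */
    printf("mapped %zu bytes of shared memory\n", size);
    munmap(mem, size);
    close(fd);
    return 0;
}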