Jan Kiszka <jan.kiszka@xxxxxxxxxxx> writes:

> On 28.02.20 11:30, Jan Kiszka wrote:
>> On 28.02.20 11:16, Alex Bennée wrote:
>>> Hi,
>>>
<snip>
>>> I believe there has been some development work for supporting VIRTIO
>>> on Xen although it seems to have stalled according to:
>>>
>>> https://wiki.xenproject.org/wiki/Virtio_On_Xen
>>>
>>> Recently at KVM Forum there was Jan's talk about Inter-VM shared
>>> memory which proposed ivshmem v2 as a VIRTIO transport:
>>>
>>> https://events19.linuxfoundation.org/events/kvm-forum-2019/program/schedule/
>>>
>>> As I understood it this would allow Xen (and other hypervisors) a
>>> simple way to carry virtio traffic between guest and endpoint.
>
> And to clarify the scope of this effort: virtio-over-ivshmem is not
> the fastest option to offer virtio to a guest (static "DMA" window),
> but it is the simplest one from the hypervisor PoV and, thus, also
> likely the easiest one to argue over when it comes to security and
> safety.

So to drill down on this: is this a particular problem with type-1
hypervisors? It seems to me any KVM-like run loop trivially supports a
range of virtio devices by virtue of trapping accesses to the
signalling area of a virtqueue and letting the VMM handle the
transaction whichever way it sees fit (I've put a rough sketch of what
I mean at the end of this mail). I've not quite understood the way Xen
interfaces to QEMU, aside from the fact that it's different to
everything else.

Moreover, it seems the type-1 hypervisors are more interested in
providing better isolation between segments of a system, whereas
VIRTIO currently assumes either the VMM or the hypervisor has full
access to the guest address space. I've seen quite a lot of slides
that want to isolate sections of device emulation to separate
processes or even separate guest VMs.

--
Alex Bennée
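
P.S. The KVM case I have in mind, as a rough sketch against the
KVM_RUN/KVM_EXIT_MMIO interface. Only the QueueNotify offset (0x50)
comes from the virtio-mmio spec; VIRTIO_MMIO_BASE is an assumed guest
mapping and handle_queue_notify() is a hypothetical VMM-side hook:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define VIRTIO_MMIO_BASE         0x0a000000UL /* assumed guest mapping */
#define VIRTIO_MMIO_QUEUE_NOTIFY 0x050        /* per the virtio-mmio spec */

/* Hypothetical VMM-side hook that drains the notified virtqueue. */
void handle_queue_notify(uint32_t queue);

/*
 * Every guest write to the device's signalling area traps out of
 * KVM_RUN as a KVM_EXIT_MMIO, so the VMM can service the virtqueue
 * whichever way it sees fit.
 */
static void run_vcpu(int vcpu_fd, struct kvm_run *run)
{
    while (ioctl(vcpu_fd, KVM_RUN, 0) >= 0) {
        if (run->exit_reason == KVM_EXIT_MMIO &&
            run->mmio.is_write &&
            run->mmio.phys_addr ==
                VIRTIO_MMIO_BASE + VIRTIO_MMIO_QUEUE_NOTIFY) {
            uint32_t queue = 0;

            memcpy(&queue, run->mmio.data, sizeof(queue));
            handle_queue_notify(queue);
        }
    }
}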