Re: VIRTIO adoption in other hypervisors

On 28.02.20 17:47, Alex Bennée wrote:

Jan Kiszka <jan.kiszka@xxxxxxxxxxx> writes:

On 28.02.20 11:30, Jan Kiszka wrote:
On 28.02.20 11:16, Alex Bennée wrote:
Hi,

<snip>
I believe there has been some development work for supporting VIRTIO on
Xen although it seems to have stalled according to:

    https://wiki.xenproject.org/wiki/Virtio_On_Xen

Recently at KVM Forum there was Jan's talk about Inter-VM shared memory
which proposed ivshmemv2 as a VIRTIO transport:

    https://events19.linuxfoundation.org/events/kvm-forum-2019/program/schedule/


As I understood it, this would give Xen (and other hypervisors) a simple
way to carry virtio traffic between a guest and an end point.

And to clarify the scope of this effort: virtio-over-ivshmem is not
the fastest option for offering virtio to a guest (it uses a static
"DMA" window), but it is the simplest one from the hypervisor's PoV
and, thus, also likely the easiest one to reason about when it comes
to security and safety.
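
(Purely to illustrate the static-window idea, nothing to do with the
actual ivshmem-v2 layout; all names below are made up:)

/* The driver bounces data into a fixed shared-memory region that both
 * sides have mapped, instead of handing the backend pointers into
 * arbitrary guest memory. The extra copy is the cost, the small and
 * static attack surface is the gain. */
#include <stdint.h>
#include <string.h>

struct shm_window {
    volatile uint32_t head;     /* producer index, written by the driver  */
    volatile uint32_t tail;     /* consumer index, written by the backend */
    uint8_t data[64 * 1024];    /* the static "DMA" window                */
};

static int send_buffer(struct shm_window *win, const void *buf, size_t len)
{
    if (len > sizeof(win->data))
        return -1;                   /* payload must fit the window */
    memcpy(win->data, buf, len);     /* bounce copy into the window */
    win->head += 1;
    /* a real transport would now ring a doorbell to notify the peer */
    return 0;
}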

So, to drill down on this: is this a particular problem with type-1
hypervisors?

It seems to me any KVM-like run loop trivially supports a range of
virtio devices by virtue of trapping accesses to the signalling area of
a virtqueue and allowing the VMM to handle the transaction whichever
way it sees fit.
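
(To make that concrete, a cut-down sketch of such a loop handling the
guest's write to a virtio-mmio QUEUE_NOTIFY register; DEV_BASE and
process_virtqueue() are just placeholders:)

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <string.h>

#define DEV_BASE 0x0a000000ULL              /* made-up virtio-mmio base      */
#define VIRTIO_MMIO_QUEUE_NOTIFY 0x050      /* register offset from the spec */

extern void process_virtqueue(uint32_t queue);   /* the interesting part */

/* run points at the kvm_run structure mmap'ed from the vCPU fd */
static void vcpu_loop(int vcpu_fd, struct kvm_run *run)
{
    for (;;) {
        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
            break;

        /* the guest's store to QUEUE_NOTIFY shows up as an MMIO exit */
        if (run->exit_reason == KVM_EXIT_MMIO &&
            run->mmio.is_write &&
            run->mmio.phys_addr == DEV_BASE + VIRTIO_MMIO_QUEUE_NOTIFY) {
            uint32_t queue;
            memcpy(&queue, run->mmio.data, sizeof(queue));
            /* service the ring in-process, in a thread, or hand it off
             * to a separate backend - the VMM gets to choose */
            process_virtqueue(queue);
        }
    }
}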

I've not quite understood the way Xen interfaces to QEMU, aside from it
being different to everything else. Moreover, it seems the type-1
hypervisors are more interested in providing better isolation between
segments of a system, whereas VIRTIO currently assumes either the VMM
or the hypervisor has full access to the full guest address space. I've
seen quite a lot of slides that want to isolate sections of device
emulation into separate processes or even separate guest VMs.

In Xen, device emulation is done by other VMs. Normally the devices are
emulated via dom0, but it is possible to have other driver domains, too
(those need to have the related PCI devices passed through to them, of
course).

PV device backends get access only to the guest pages the PV frontends
allow. This is done via so-called "grants", which are per guest: a
frontend can grant another Xen VM access to dedicated pages. The backend
uses the grants to map those pages via the hypervisor in order to
perform the I/O. After the I/O has finished, the backend unmaps those
pages again.
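
Roughly, the backend side looks like this (using the structures from
Xen's public grant_table.h; real Linux backends go through the kernel's
gnttab_* helpers instead of raw hypercalls, and error handling is
trimmed):

#include <xen/interface/xen.h>           /* domid_t                       */
#include <xen/interface/grant_table.h>   /* gnttab_map_grant_ref et al.   */
#include <asm/xen/hypercall.h>           /* HYPERVISOR_grant_table_op()   */

/* map_area, frontend_domid and ref are assumed to have been negotiated
 * via xenstore / the ring protocol */
static int backend_map_page(void *map_area, domid_t frontend_domid,
                            grant_ref_t ref, grant_handle_t *handle)
{
    struct gnttab_map_grant_ref op = {
        .host_addr = (unsigned long)map_area, /* where we want the page   */
        .flags     = GNTMAP_host_map,         /* map into our page tables */
        .ref       = ref,                     /* grant issued by frontend */
        .dom       = frontend_domid,
    };

    /* Xen checks that 'ref' was really granted to this domain before
     * inserting the frontend's page into our address space */
    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1) ||
        op.status != GNTST_okay)
        return -1;

    *handle = op.handle;
    return 0;
}

static void backend_unmap_page(void *map_area, grant_handle_t handle)
{
    struct gnttab_unmap_grant_ref op = {
        .host_addr = (unsigned long)map_area,
        .handle    = handle,
    };

    /* after the I/O has finished the mapping is dropped again, so the
     * backend loses all access to the frontend's page */
    HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1);
}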

For legacy device emulation via qemu, the guest running qemu needs to
get access to all of the guest's memory, as the guest won't grant any
pages to the emulating VM. It is possible to let qemu run in a small
stub domain using PV devices in order to isolate the legacy guest from
e.g. dom0.
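
For example, an xl config fragment along these lines
(device_model_stubdomain_override is the relevant xl.cfg option, the
other values are just illustrative):

name = "hvm-guest"
type = "hvm"
memory = 2048
# run qemu in its own small PV stub domain instead of in dom0
device_model_stubdomain_override = 1
disk = [ "phy:/dev/vg/guest-disk,xvda,w" ]
vif  = [ "bridge=xenbr0" ]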


Hope that makes it clearer,


Juergen
