On 04/12/2011 12:13 PM, Laine Stump wrote:
Abstraction of guest <--> host network connection in libvirt
=============================================================
Here is a response that was posted in the tracking bug report
(https://bugzilla.redhat.com/show_bug.cgi?id=643947). I'm transferring
it here and replying to keep the discussion in one place:
Martin Wilck writes:
> I notice that this RFE doesn't mention openvswitch at all. Wouldn't that
> offer all you need, and even more, if there was a plugin for configuring
> openvswitch in libvirt?
Perhaps having "vSwitch" in the title of this bug is confusing things...
openvswitch is an alternative to 1) a Linux host bridge, 2) a libvirt
virtual network (which is also a host bridge in disguise), or 3) a
direct connection to an interface. It is *not*, as far as I
understand, something that could/should be used together with any of
those, but *instead of* them.
There are libvirt users who are using macvtap for direct connection
between guests and the host network interface device 1) for performance
reasons, and 2) because libvirt's macvtap support also works with
physical switches that implement VEPA and VN-Link - in these modes all traffic
to/from a guest is mandated to travel through the physical switch, even
if it ends up hairpinning back to the same host. This allows the admin
of the physical switch to enforce rules about type of traffic, QoS, etc.
openvswitch would not be interesting in this scenario, because it adds
extra overhead on the host, and also allows bypassing the mandatory trip
to the physical switch.
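For reference, this is roughly what such a 'direct' guest interface looks
like today - the <source> element ties the guest to a specific host device,
and the device name and virtualport parameter values below are only
illustrative:

<interface type='direct'>
  <source dev='eth0' mode='vepa'/>
  <virtualport type='802.1Qbg'>
    <parameters managerid='11' typeid='1193047' typeidversion='2'
                instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
  </virtualport>
  <model type='virtio'/>
</interface>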
The main purpose of the proposal here is to allow guests using this type
of direct connection to migrate successfully among hosts. A nice side
effect is that it will make it easier to add support for things like
openvswitch (which, as I understand it, can just become another type of
libvirt <network>). So while openvswitch may benefit from this code (see
below), it is not a viable alternative to it.
Option 3
-----------
Up to now we've only discussed the need for separating the
host-specific config (<source> element) in the case of type='direct'
interfaces (well, in reality I've gone back and edited this document
so many times that this is no longer true, but play along with me! :-). But
it really is a problem for all interface types - all of the
information currently in the guest's interface <source> element really
is tied to the host, and shouldn't be defined in detail in the guest
XML; it should instead be defined once for each host, and only
referenced by some name in the guest XML; that way as a guest moves
from host to host, it will automatically adjust its connection to
match the new environment.
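For example, even for the simplest case of an existing host bridge, today's
syntax hard-codes a host detail (the bridge device name) into the guest
definition; the names below are just examples:

<interface type='bridge'>
  <source bridge='br0'/>  <!-- 'br0' only has meaning on this particular host -->
  <model type='virtio'/>
</interface>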
As a more general solution, instead of having the special new
"interfacePool" object in the config, what if the XML for "network was
expanded to mean "any type of guest network connection" (with a new
"type='xxx'" attribute at the toplevel to indicate which type), not
just "a private bridge optionally connected to the real world via
routing/NAT"?
If this was the case, the guest interface XML could always be, eg:

<interface type='network'>
  <source network='red-network'/>
  ...
</interface>
and depending on the network config of the host the guest was migrated
to, this could be either a direct (macvtap) connection via an
interface allocated from a pool (the pool being defined in the
definition of 'red-network'), a bridge (again, pointed to by the
definition of 'red-network'), or a virtual network (using the current
network definition syntax). This way the same guest could be migrated
not only between macvtap-enabled hosts, but from there to a host using
a bridge, or maybe a host in a remote location that used a virtual
network with a secure tunnel to connect back to the rest of the
red-network. (Part of the migration process would of course check that
the destination host had a network of the proper name, and fail if it
didn't; management software at a level above libvirt would probably
filter a list of candidate migration destinations based on available
networks, and only attempt migration to one that had the matching
network available).
Examples of 'red-network' for different types of connections (all of
these would work with the interface XML given above):
<!-- Existing usage - a libvirt virtual network -->

<network> <!-- (you could put "type='virtual'" here for symmetry) -->
  <name>red-network</name>
  <bridge name='virbr0'/>
  <forward mode='route'/>
  ...
</network>
<!-- The simplest - an existing host bridge -->

<network type='bridge'>
  <name>red-network</name>
  <bridge name='br0'/>
</network>
<network type='direct'>
  <name>red-network</name>
  <source mode='vepa'>
    <!-- define the pool of available interfaces here. Interfaces may have -->
    <!-- parameters associated with them, eg max number of simultaneous guests -->
  </source>
  <!-- add any other elements from the guest interface XML that are tied to -->
  <!-- the host here (virtualport ?) (of course if they're host specific, they -->
  <!-- should have been in <source> in the first place!!) -->
</network>
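Purely as a hypothetical sketch of what that pool might look like (the
<interface> sub-element and its maxGuests attribute are invented here for
illustration; this is not existing libvirt syntax):

<network type='direct'>
  <name>red-network</name>
  <source mode='vepa'>
    <!-- hypothetical pool syntax: each entry names a host NIC and a limit -->
    <interface dev='eth2' maxGuests='8'/>
    <interface dev='eth3' maxGuests='8'/>
  </source>
</network>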
In other words, to support openvswitch, we would add:

<network type='openvswitch'>
  <name>red-network</name>
  <!-- whatever XML is necessary to configure the openvswitch -->
</network>
Then at guest startup time, libvirt would do all the setup needed to
connect the qemu process's network interface to the openvswitch.
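Purely as an illustration of what that might look like (the elements inside
<network> below are invented for this sketch, not an existing or proposed
libvirt syntax):

<network type='openvswitch'>
  <name>red-network</name>
  <!-- hypothetical: name of an existing openvswitch bridge on the host -->
  <bridge name='ovsbr0'/>
</network>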