On 04/27/2011 09:58 AM, Oved Ourfalli wrote:
Laine, hello.
We read your proposal for the abstraction of the guest <--> host network
connection in libvirt.
You had an open issue there regarding the vepa/vnlink attributes:
"3) What about the parameters in the <virtualport> element that are
currently used by vepa/vnlink. Do those belong with the host, or with
the guest?"
The parameters for the virtualport element should be on the guest, and
not the host, because a specific interface can run multiple profiles,
Are you talking about the host interface or the guest interface? If you
mean that multiple different profiles can be used when connecting to a
particular switch - as long as there are only a few different profiles,
rather than each guest having its own unique profile - then it still
seems better to have the port profile live with the network definition
(and just define multiple networks, one for each port profile).
so it would be a mistake to define a profile as interface-specific on
the host. Moreover, putting it at the guest level will enable us in the
future (if supported by libvirt/qemu) to migrate a VM from a host with
vepa/vnlink interfaces to another host with a bridge, for example.
It seems to me like doing exactly the opposite would make it easier to
migrate to a host that used a different kind of switching (from vepa to
vnlink, or from a bridged interface to vepa, etc), since the port
profile required for a particular host's network would be at the host
waiting to be used.
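To make that concrete (a hypothetical sketch, not settled syntax): if the port profile lived with the network definition, the guest interface could presumably be reduced to nothing more than a reference to the network name:

<interface type='network'>
  <source network='red-network'/>
</interface>

with each host's own definition of red-network supplying whatever <virtualport> (if any) its local switching setup requires.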
So, in the networks at the host level you will have:
<network type='direct'>
  <name>red-network</name>
  <source mode='vepa'>
    <pool>
      <interface>
        <name>eth0</name>
        .....
      </interface>
      <interface>
        <name>eth4</name>
        .....
      </interface>
      <interface>
        <name>eth18</name>
        .....
      </interface>
    </pool>
  </source>
</network>
And in the guest you will have (for vepa):
<interface type='network'>
  <source network='red-network'/>
  <virtualport type="802.1Qbg">
    <parameters managerid="11" typeid="1193047" typeidversion="2"
                instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
  </virtualport>
</interface>
Or (for vnlink):
<interface type='network'>
  <source network='red-network'/>
  <virtualport type="802.1Qbh">
    <parameters profile_name="profile1"/>
  </virtualport>
</interface>
This illustrates the problem I was wondering about - in your example it
would not be possible for the guest to migrate from the host using a
vepa switch to the host using a vnlink switch (and it would be possible
to migrate to a host using a standard bridge only if the virtualport
element were ignored). If the virtualport element lived with the
definition of red-network on each host, the guest could be migrated
without a problem.
The only problematic thing would be if any of the attributes within
<parameters> was unique for each guest (I don't know anything about the
individual attributes, but "instanceid" sounds like it might be
different for each guest).
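For illustration, a network definition carrying the port profile might look something like this (hypothetical syntax - exactly where <virtualport> would sit inside <network> is not settled by the proposal, and a per-guest attribute such as instanceid is deliberately left out here, since it would presumably need to stay with the guest):

<network type='direct'>
  <name>red-network</name>
  <source mode='vepa'>
    <pool>
      <interface>
        <name>eth0</name>
      </interface>
    </pool>
  </source>
  <virtualport type='802.1Qbg'>
    <parameters managerid='11' typeid='1193047' typeidversion='2'/>
  </virtualport>
</network>

A host whose red-network used vnlink instead would carry a type="802.1Qbh" virtualport in its copy of the definition, and the guest XML would be identical on both.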
Then, when migrating from a vepa/vnlink host to another vepa/vnlink
host containing red-network, the profile attributes will be available
in the guest domain XML.
If the target host has a red-network that isn't vepa/vnlink, we want
to be able to choose whether use of the profile attributes is optional
(i.e., libvirt won't fail when migrating to a network of another type)
or mandatory (i.e., libvirt will fail when migrating to a
non-vepa/vnlink network).
We have something similar in CPU flags:
<cpu match="exact">
  <model>qemu64</model>
  <topology sockets="S" cores="C" threads="T"/>
  <feature policy="require/optional/disable......" name="sse2"/>
</cpu>
In this analogy, does "CPU flags" == "mode (vepa/vnlink/bridge)", or
does "CPU flags" == "virtualport parameters"? It seems like what you
want can be satisfied simply by not defining "red-network" on the
hosts that don't have the proper networking setup available (maybe
what you *really* want to call it is "red-vnlink-network").
--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list