Hello guys,
maybe you can help me with the following issue. I have created a little cloud with a host and two worker nodes using OpenNebula. The setup has gone well so far: I am able to create VMs and move them between the nodes via both normal and live migration.
Another possibly important detail: I configured the virtual bridge on both worker nodes like this:
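(For clarity: by "normal" and "live" migration I mean the two OpenNebula CLI operations. The VM ID 0 and host ID 1 below are just placeholders, and depending on the OpenNebula version live migration is spelled either "onevm livemigrate" or "onevm migrate --live":)

onevm migrate 0 1        # normal (cold) migration: VM is saved, moved, resumed
onevm livemigrate 0 1    # live migration: VM keeps running while it moves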
auto br0
iface br0 inet static
    address 192.168.0.[2|3]
    netmask 255.255.255.0
    network 192.168.0.0
    broadcast 192.168.0.255
    #gateway 192.168.0.1
    bridge_ports eth0
    bridge_stp on
    bridge_maxwait 0
The command "brctl show" returns the following:
bridge name     bridge id               STP enabled     interfaces
br0             8000.003005c34278       yes             eth0
                                                        vnet0  (<- only appears on the node with the running VM)
virbr0          8000.000000000000       yes
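In case it helps: as far as I understand, the bridge learns behind which port each MAC address lives, and that table can be dumped on either node. The MAC address below is only a made-up example for the VM's NIC:

brctl showmacs br0       # columns: port no / mac addr / is local? / ageing timer
# after a migration, check whether the VM's MAC (e.g. 02:00:c0:a8:00:05)
# shows up behind vnet0 on the new node and has aged out on the old one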
According to the libvirt wiki this setup is fine as it is. However, here is the issue I am having: when I create a VM and assign it a static IP, e.g. 192.168.0.5, I can initially ping the VM from both worker nodes. When I perform a live migration, the ping stops for a few seconds (until the nodes learn the VM's new location) and then resumes normally.
However, when I perform a normal migration the ping never recovers; instead it repeatedly answers with "Destination Host Unreachable".
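My (unconfirmed) suspicion is that after a normal migration nothing announces the VM's new location, so the other machines keep stale ARP/bridge entries. Sending a gratuitous ARP from inside the migrated VM seems like a possible workaround; this sketch assumes the VM's interface is eth0 and its IP is 192.168.0.5:

arping -U -c 3 -I eth0 192.168.0.5   # send unsolicited (gratuitous) ARP replies to update neighbours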
Do you know what the problem could be? What is the difference between a normal and a live migration, and why does the ping still work after a live migration but not after a normal one?
Thanks a lot!
Regards, Adnan