Re: LXC: autostart feature does not set all interfaces to state up.

On 10.07.2013 03:20, Gao feng wrote:
> On 07/09/2013 09:11 PM, Richard Weinberger wrote:
>> On 08.07.2013 05:54, Gao feng wrote:
>>> On 07/05/2013 06:22 PM, Richard Weinberger wrote:
>>>> On 05.07.2013 03:36, Gao feng wrote:
>>>>> On 07/05/2013 04:45 AM, Richard Weinberger wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 03.07.2013 12:04, Gao feng wrote:
>>>>>>> Hi,
>>>>>>> On 07/01/2013 03:45 PM, Richard Weinberger wrote:
>>>>>>>> Hi!
>>>>>>>>
>>>>>>>> If you have multiple LXC containers with networking and the autostart feature enabled, libvirtd fails to
>>>>>>>> bring up some veth interfaces on the host side.
>>>>>>>>
>>>>>>>> Most of the time only the first veth device is in state UP; all the others are down.
>>>>>>>>
>>>>>>>> Reproducing is easy.
>>>>>>>> 1. Define a few containers (5 in my case)
>>>>>>>> 2. Run "virsh autostart ..." on each one.
>>>>>>>> 3. stop/start libvirtd
>>>>>>>>
>>>>>>>> You'll observe that all containers are running, but "ip a" on the host will show
>>>>>>>> that not all veth devices are up, and those devices are not usable within the containers.
>>>>>>>>
>>>>>>>> This is not userns related; I just retested with libvirt as of today.
>>>>>>>
>>>>>>> I cannot reproduce this problem on my test bed...
>>>>>>
>>>>>> Strange.
>>>>>>
>>>>>>> Maybe you should wait a few seconds for these containers to start.
>>>>>>
>>>>>> Please see the attached shell script. Using it I'm able to trigger the issue on all of
>>>>>> my test machines.
>>>>>> run.sh creates six very minimal containers and enables autostart. Then it kills and restarts libvirtd.
>>>>>> After the script is done you'll see that only one or two veth devices are up.
>>>>>>
>>>>>> On the other hand, if I start them manually using a command like this one:
>>>>>> for cfg in a b c d e f ; do /opt/libvirt/bin/virsh -c lxc:/// start test-$cfg ; done
>>>>>> All veths are always up.
>>>>>>
>>>>>
>>>>>
>>>>> I still cannot reproduce it, even using your script.
>>>>>
>>>>> [root@Donkey-I5 Desktop]# ./run.sh
>>>>> Domain test-a defined from container_a.conf
>>>>>
>>>>> Domain test-a marked as autostarted
>>>>>
>>>>> Domain test-b defined from container_b.conf
>>>>>
>>>>> Domain test-b marked as autostarted
>>>>>
>>>>> Domain test-c defined from container_c.conf
>>>>>
>>>>> Domain test-c marked as autostarted
>>>>>
>>>>> Domain test-d defined from container_d.conf
>>>>>
>>>>> Domain test-d marked as autostarted
>>>>>
>>>>> Domain test-e defined from container_e.conf
>>>>>
>>>>> Domain test-e marked as autostarted
>>>>>
>>>>> Domain test-f defined from container_f.conf
>>>>>
>>>>> Domain test-f marked as autostarted
>>>>>
>>>>> 2013-07-05 01:26:47.155+0000: 27163: info : libvirt version: 1.1.0
>>>>> 2013-07-05 01:26:47.155+0000: 27163: debug : virLogParseOutputs:1334 : outputs=1:file:/home/gaofeng/libvirtd.log
>>>>> waiting a bit....
>>>>> 167: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>>> 169: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>>> 171: veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>>> 173: veth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>>> 175: veth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>>> 177: veth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>>>
>>>>>
>>>>> Can you post your libvirt debug log?
>>>>
>>>> Please see attached file.
>>>>
>>>> 43: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>> 45: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>> 47: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
>>>> 49: veth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>> 51: veth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
>>>> 53: veth5: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 100
>>>>
>>>
>>> Strange, I cannot see any veth-related error messages in your log.
>>>
>>> It seems like all of the host's veth devices had been up, but for some reason they went down.
>>
>> I think libvirt has to do "ip link set dev vethX up".
>> Otherwise the device state is undefined.
>>
> 
> Yes, actually libvirt did bring the veth devices up; that's why only veth2 & veth5 are down.

Where does libvirt bring the devices up? The debug log does not contain any "ip link set dev XXX up" commands.
Also, in src/util/virnetdevveth.c I'm unable to find such an ip command.
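(As a workaround, the affected host-side devices can be brought up by hand with something like

  for dev in veth2 veth5 ; do ip link set dev $dev up ; done

where the device names are just the ones from the listing above. That is obviously not a fix, though; libvirt should bring the host side up itself.)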

> I need to know why these two devices are down; I believe they were up, and your bridge and default-net
> look good. So please show me your kernel messages (dmesg); maybe they can give us some useful information.

This time veth4 and veth5 are down.

---cut---
[    4.167119] systemd[1]: Starting File System Check on Root Device...
[    5.150740] mount (725) used greatest stack depth: 4616 bytes left
[    5.397652] systemd-udevd[755]: starting version 195
[    6.155648] EXT4-fs (vda2): re-mounted. Opts: acl,user_xattr
[    6.369755] systemd-journald[730]: Received SIGUSR1
[    6.877588] Adding 690172k swap on /dev/vda1.  Priority:-1 extents:1 across:690172k
[    8.205295] ip (1446) used greatest stack depth: 4584 bytes left
[    9.397258] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[    9.397275] 8021q: adding VLAN 0 to HW filter on device eth0
[    9.397733] ip (1690) used greatest stack depth: 4456 bytes left
[   11.399834] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[   11.401711] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   41.179748] IPv6: ADDRCONF(NETDEV_UP): virbr0: link is not ready
[   42.126303] cgroup: libvirtd (2736) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
[   42.126307] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
[   42.126599] cgroup: libvirtd (2736) created nested cgroup for controller "blkio" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
[   42.145511] ip (2843) used greatest stack depth: 4136 bytes left
[   42.149231] device veth0 entered promiscuous mode
[   42.152167] IPv6: ADDRCONF(NETDEV_UP): veth0: link is not ready
[   42.559264] IPv6: ADDRCONF(NETDEV_CHANGE): veth0: link becomes ready
[   42.559336] virbr0: topology change detected, propagating
[   42.559343] virbr0: port 1(veth0) entered forwarding state
[   42.559359] virbr0: port 1(veth0) entered forwarding state
[   42.559398] IPv6: ADDRCONF(NETDEV_CHANGE): virbr0: link becomes ready
[   42.702130] ip (3070) used greatest stack depth: 4088 bytes left
[   42.708164] device veth1 entered promiscuous mode
[   42.712190] IPv6: ADDRCONF(NETDEV_UP): veth1: link is not ready
[   42.712197] virbr0: topology change detected, propagating
[   42.712202] virbr0: port 2(veth1) entered forwarding state
[   42.712212] virbr0: port 2(veth1) entered forwarding state
[   43.147155] virbr0: port 2(veth1) entered disabled state
[   43.225250] IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready
[   43.225322] virbr0: topology change detected, propagating
[   43.225328] virbr0: port 2(veth1) entered forwarding state
[   43.225358] virbr0: port 2(veth1) entered forwarding state
[   43.325556] ip (3293) used greatest stack depth: 4064 bytes left
[   43.330133] device veth2 entered promiscuous mode
[   43.334201] IPv6: ADDRCONF(NETDEV_UP): veth2: link is not ready
[   43.334208] virbr0: topology change detected, propagating
[   43.334214] virbr0: port 3(veth2) entered forwarding state
[   43.334224] virbr0: port 3(veth2) entered forwarding state
[   43.613184] IPv6: ADDRCONF(NETDEV_CHANGE): veth2: link becomes ready
[   43.767050] ip (3521) used greatest stack depth: 3992 bytes left
[   43.773832] device veth3 entered promiscuous mode
[   43.778169] IPv6: ADDRCONF(NETDEV_UP): veth3: link is not ready
[   43.778177] virbr0: topology change detected, propagating
[   43.778182] virbr0: port 4(veth3) entered forwarding state
[   43.778193] virbr0: port 4(veth3) entered forwarding state
[   44.076299] IPv6: ADDRCONF(NETDEV_CHANGE): veth3: link becomes ready
[   44.153214] device veth4 entered promiscuous mode
[   44.158209] IPv6: ADDRCONF(NETDEV_UP): veth4: link is not ready
[   44.473317] IPv6: ADDRCONF(NETDEV_CHANGE): veth4: link becomes ready
[   44.473400] virbr0: topology change detected, propagating
[   44.473407] virbr0: port 5(veth4) entered forwarding state
[   44.473423] virbr0: port 5(veth4) entered forwarding state
[   44.566186] device veth5 entered promiscuous mode
[   44.571234] IPv6: ADDRCONF(NETDEV_UP): veth5: link is not ready
[   44.571243] virbr0: topology change detected, propagating
[   44.571250] virbr0: port 6(veth5) entered forwarding state
[   44.571261] virbr0: port 6(veth5) entered forwarding state
[   44.902308] IPv6: ADDRCONF(NETDEV_CHANGE): veth5: link becomes ready
[   45.000580] virbr0: port 5(veth4) entered disabled state
[   45.348548] virbr0: port 6(veth5) entered disabled state
[   45.348837] ip (4349) used greatest stack depth: 3944 bytes left
---cut---
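(Note that the last two bridge messages above, "virbr0: port 5(veth4) entered disabled state" and "virbr0: port 6(veth5) entered disabled state", match exactly the two devices that are down this time. The device states were checked on the host with plain iproute2, roughly

  ip -o link show | grep veth

where the grep is only there to filter the output.)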

Thanks,
//richard
