Hi there,

I am running an Ubuntu 9.04 x64 server with KVM/libvirt. I recently found that the "virsh destroy" command sometimes destroys VMs it should not. For example, please see the following screen capture:

root@node1:/srv/cloud/ImgRep/WinXP# virsh list
Connecting to uri: qemu:///system
 Id Name                 State
----------------------------------
 27 one-80               running
 28 one-81               running
 29 one-83               running
 30 one-82               running
 31 one-79               running

root@node1:/srv/cloud/ImgRep/WinXP# virsh destroy 29
Connecting to uri: qemu:///system
Domain 29 destroyed

root@node1:/srv/cloud/ImgRep/WinXP# virsh list
Connecting to uri: qemu:///system
 Id Name                 State
----------------------------------
 27 one-80               running
 28 one-81               running
 31 one-79               running

Please note that VM 30 was also destroyed when we asked to destroy only VM 29. /var/log/messages showed the following:

Sep 30 08:58:59 node1 kernel: [238101.304619] br1: port 5(vnet4) entering disabled state
Sep 30 08:58:59 node1 kernel: [238101.342851] device vnet4 left promiscuous mode
Sep 30 08:58:59 node1 kernel: [238101.342854] br1: port 5(vnet4) entering disabled state
Sep 30 08:58:59 node1 kernel: [238101.424851] br1: port 4(vnet3) entering disabled state
Sep 30 08:59:00 node1 kernel: [238101.463031] device vnet3 left promiscuous mode
Sep 30 08:59:00 node1 kernel: [238101.463035] br1: port 4(vnet3) entering disabled state
Sep 30 09:10:53 node1 -- MARK --
Sep 30 09:30:53 node1 -- MARK --
Sep 30 09:50:53 node1 -- MARK --
Sep 30 09:51:31 node1 kernel: [241253.266955] device vnet3 entered promiscuous mode
Sep 30 09:51:31 node1 kernel: [241253.268857] br1: port 4(vnet3) entering learning state
Sep 30 09:51:40 node1 kernel: [241262.260147] br1: topology change detected, propagating
Sep 30 09:51:40 node1 kernel: [241262.260152] br1: port 4(vnet3) entering forwarding state
Sep 30 09:53:59 node1 kernel: [241401.204570] br1: port 4(vnet3) entering disabled state
Sep 30 09:53:59 node1 kernel: [241401.232395] device vnet3 left promiscuous mode
Sep 30 09:53:59 node1 kernel: [241401.232399] br1: port 4(vnet3) entering disabled state
Sep 30 09:54:20 node1 kernel: [241421.694803] br1: port 2(vnet0) entering disabled state
Sep 30 09:54:20 node1 kernel: [241421.733095] device vnet0 left promiscuous mode
Sep 30 09:54:20 node1 kernel: [241421.733099] br1: port 2(vnet0) entering disabled state
Sep 30 09:54:20 node1 kernel: [241421.815209] br2: port 2(vnet1) entering disabled state
Sep 30 09:54:20 node1 kernel: [241421.853728] device vnet1 left promiscuous mode
Sep 30 09:54:20 node1 kernel: [241421.853732] br2: port 2(vnet1) entering disabled state
Sep 30 09:54:20 node1 kernel: [241421.935254] br1: port 3(vnet2) entering disabled state
Sep 30 09:54:20 node1 kernel: [241421.963527] device vnet2 left promiscuous mode
Sep 30 09:54:20 node1 kernel: [241421.963531] br1: port 3(vnet2) entering disabled state
Sep 30 09:55:29 node1 kernel: [241490.644204] br1: port 6(vnet5) entering disabled state
Sep 30 09:55:29 node1 kernel: [241490.673195] device vnet5 left promiscuous mode
Sep 30 09:55:29 node1 kernel: [241490.673199] br1: port 6(vnet5) entering disabled state
Sep 30 10:10:53 node1 -- MARK --
Sep 30 10:11:49 node1 kernel: [242470.955142] device vnet0 entered promiscuous mode
Sep 30 10:11:49 node1 kernel: [242470.957051] br1: port 2(vnet0) entering learning state
Sep 30 10:11:52 node1 kernel: [242474.180036] device vnet1 entered promiscuous mode
Sep 30 10:11:52 node1 kernel: [242474.181943] br1: port 3(vnet1) entering learning state
Sep 30 10:11:55 node1 kernel: [242477.431693] device vnet2 entered promiscuous mode
Sep 30 10:11:55 node1 kernel: [242477.433558] br1: port 4(vnet2) entering learning state
Sep 30 10:11:58 node1 kernel: [242479.950082] br1: topology change detected, propagating
Sep 30 10:11:58 node1 kernel: [242479.950086] br1: port 2(vnet0) entering forwarding state
Sep 30 10:12:00 node1 kernel: [242482.059146] device vnet3 entered promiscuous mode
Sep 30 10:12:00 node1 kernel: [242482.060330] br1: port 5(vnet3) entering learning state
Sep 30 10:12:01 node1 kernel: [242483.180170] br1: topology change detected, propagating
Sep 30 10:12:01 node1 kernel: [242483.180173] br1: port 3(vnet1) entering forwarding state
Sep 30 10:12:04 node1 kernel: [242485.710948] device vnet4 entered promiscuous mode
Sep 30 10:12:04 node1 kernel: [242485.712109] br1: port 6(vnet4) entering learning state
Sep 30 10:12:04 node1 kernel: [242486.430098] br1: topology change detected, propagating
Sep 30 10:12:04 node1 kernel: [242486.430104] br1: port 4(vnet2) entering forwarding state
Sep 30 10:12:09 node1 kernel: [242491.051857] br1: topology change detected, propagating
Sep 30 10:12:09 node1 kernel: [242491.051862] br1: port 5(vnet3) entering forwarding state
Sep 30 10:12:13 node1 kernel: [242494.711792] br1: topology change detected, propagating
Sep 30 10:12:13 node1 kernel: [242494.711797] br1: port 6(vnet4) entering forwarding state
Sep 30 10:22:00 node1 kernel: [243082.274008] br1: port 4(vnet2) entering disabled state
Sep 30 10:22:00 node1 kernel: [243082.312108] device vnet2 left promiscuous mode
Sep 30 10:22:00 node1 kernel: [243082.312112] br1: port 4(vnet2) entering disabled state
Sep 30 10:22:00 node1 kernel: [243082.365331] br1: port 5(vnet3) entering disabled state
Sep 30 10:22:00 node1 kernel: [243082.393430] device vnet3 left promiscuous mode
Sep 30 10:22:00 node1 kernel: [243082.393433] br1: port 5(vnet3) entering disabled state

I have 4 NICs; ifconfig shows:

root@node1:/srv/cloud/ImgRep/WinXP# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:24:e8:6b:0d:57
          inet6 addr: fe80::224:e8ff:fe6b:d57/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:7799931 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6879357 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4360166425 (4.3 GB)  TX bytes:4350243659 (4.3 GB)

br0       Link encap:Ethernet  HWaddr 00:24:e8:6b:0d:57
          inet addr:192.168.1.128  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::224:e8ff:fe6b:d57/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7641913 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6412260 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4169876739 (4.1 GB)  TX bytes:4291182908 (4.2 GB)

br1       Link encap:Ethernet  HWaddr 00:24:e8:6b:0d:5b
          inet6 addr: fe80::224:e8ff:fe6b:d5b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:152606 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:11276026 (11.2 MB)  TX bytes:468 (468.0 B)

br2       Link encap:Ethernet  HWaddr 00:24:e8:6b:0d:5d
          inet6 addr: fe80::224:e8ff:fe6b:d5d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:152596 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:11286132 (11.2 MB)  TX bytes:468 (468.0 B)

eth0      Link encap:Ethernet  HWaddr 00:24:e8:6b:0d:57
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:3622205 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3443103 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1758147112 (1.7 GB)  TX bytes:2174389369 (2.1 GB)
          Interrupt:36 Memory:d6000000-d6012700

eth1      Link encap:Ethernet  HWaddr 00:24:e8:6b:0d:57
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:4177726 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3436254 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2602019313 (2.6 GB)  TX bytes:2175854290 (2.1 GB)
          Interrupt:48 Memory:d8000000-d8012700

eth2      Link encap:Ethernet  HWaddr 00:24:e8:6b:0d:5b
          inet6 addr: fe80::224:e8ff:fe6b:d5b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:256014 errors:0 dropped:0 overruns:0 frame:0
          TX packets:91549 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:20680139 (20.6 MB)  TX bytes:17317087 (17.3 MB)
          Interrupt:32 Memory:da000000-da012700

eth3      Link encap:Ethernet  HWaddr 00:24:e8:6b:0d:5d
          inet6 addr: fe80::224:e8ff:fe6b:d5d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:245189 errors:0 dropped:0 overruns:0 frame:0
          TX packets:32669 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:20866257 (20.8 MB)  TX bytes:8265329 (8.2 MB)
          Interrupt:42 Memory:dc000000-dc012700

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:324787 errors:0 dropped:0 overruns:0 frame:0
          TX packets:324787 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:571707504 (571.7 MB)  TX bytes:571707504 (571.7 MB)

vnet0     Link encap:Ethernet  HWaddr 52:c3:a3:d0:09:84
          inet6 addr: fe80::50c3:a3ff:fed0:984/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:72 errors:0 dropped:0 overruns:0 frame:0
          TX packets:533 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:10410 (10.4 KB)  TX bytes:49306 (49.3 KB)

vnet1     Link encap:Ethernet  HWaddr 02:0d:4c:1e:5c:82
          inet6 addr: fe80::d:4cff:fe1e:5c82/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:36 errors:0 dropped:0 overruns:0 frame:0
          TX packets:563 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:6023 (6.0 KB)  TX bytes:53245 (53.2 KB)

vnet4     Link encap:Ethernet  HWaddr a6:58:67:2b:37:8a
          inet6 addr: fe80::a458:67ff:fe2b:378a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:42 errors:0 dropped:0 overruns:0 frame:0
          TX packets:552 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:6733 (6.7 KB)  TX bytes:52189 (52.1 KB)

Note that no IP address is assigned to the br1 device used here; I don't think that should be a problem. Also, the first two NICs are bonded (bond0) and then bridged as br0; I don't think that is a problem either.
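As a defensive measure while tracking this down, one idea (a sketch of mine, not something from libvirt's documentation) is to resolve the numeric ID to the domain name before destroying, since names are stable identifiers while numeric IDs are reassigned as domains come and go. The `id_to_name` helper below is hypothetical; it just demonstrates the parsing step against the captured "virsh list" output from above:

```shell
#!/bin/sh
# Illustrative helper (an assumption, not part of virsh): look up the name
# for a given domain ID in "virsh list" output, so the destroy can target
# the name. Here we parse a captured listing instead of calling virsh live.

list_output=' Id Name                 State
----------------------------------
 27 one-80               running
 28 one-81               running
 29 one-83               running
 30 one-82               running
 31 one-79               running'

# Print column 2 (the name) of the row whose first column matches the ID.
id_to_name() {
    printf '%s\n' "$list_output" | awk -v id="$1" '$1 == id { print $2 }'
}

name=$(id_to_name 29)
echo "$name"    # one-83
# With a live listing this would then be: virsh destroy "$name"
```

On a live host the listing would come from `virsh list` itself; destroying by name at least rules out any confusion over which numeric ID maps to which domain.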
root@node1:/srv/cloud/ImgRep/WinXP# virsh --version
Connecting to uri: qemu:///system
0.6.1
root@node1:/srv/cloud/ImgRep/WinXP# kvm -h
QEMU PC emulator version 0.9.1 (kvm-84), Copyright (c) 2003-2008 Fabrice Bellard

Has anyone had a similar problem before? How can I fix it?

Thanks a lot.
Shi
--
Shi Jin, PhD

--
Libvir-list mailing list
Libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list