> Hi,
>
> I seem to lack a vnet to bridge device.
>
> When I go to change my interface on the VM using the GUI, I do not see
> an option for "Host device vnet# (Bridge 'br6')".
>
> Instead I see "Host device eth6 (Bridge 'br6')". So before creating one via:
>
> brctl addif...
>
> Let me explain my config:
>
> eth0 - standard MTU
> eth1 - disabled
> *eth6 - 10Gb at jumbo
> * This card was added after KVM was setup and running.
>
> My ifconfig output:
>
> br0       Link encap:Ethernet  HWaddr 00:25:90:63:9F:7A
>           inet addr:10.0.10.218  Bcast:10.0.255.255  Mask:255.255.0.0
>           inet6 addr: fe80::225:90ff:fe63:9f7a/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:754670 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:43162 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:160904242 (153.4 MiB)  TX bytes:51752758 (49.3 MiB)
>
> br6       Link encap:Ethernet  HWaddr 00:05:33:48:7B:29
>           inet addr:10.0.10.220  Bcast:10.0.255.255  Mask:255.255.0.0
>           inet6 addr: fe80::205:33ff:fe48:7b29/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
>           RX packets:4130 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:11150 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:131498 (128.4 KiB)  TX bytes:513156 (501.1 KiB)
>
> eth0      Link encap:Ethernet  HWaddr 00:25:90:63:9F:7A
>           inet6 addr: fe80::225:90ff:fe63:9f7a/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:3379929 errors:18 dropped:0 overruns:0 frame:18
>           TX packets:3565007 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:840911383 (801.9 MiB)  TX bytes:3519831013 (3.2 GiB)
>           Memory:fbbe0000-fbc00000
>
> eth6      Link encap:Ethernet  HWaddr 00:05:33:48:7B:29
>           UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
>           Memory:fbd40000-fbd7ffff
>
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           inet6 addr: ::1/128 Scope:Host
>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>           RX packets:185130 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:185130 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:138905226 (132.4 MiB)  TX bytes:138905226 (132.4 MiB)
>
> virbr0    Link encap:Ethernet  HWaddr 52:54:00:CE:7A:65
>           inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:11139 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:0 (0.0 b)  TX bytes:512410 (500.4 KiB)
>
> vnet0     Link encap:Ethernet  HWaddr FE:30:48:7E:65:72
>           inet6 addr: fe80::fc30:48ff:fe7e:6572/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
>           RX packets:1045 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:697730 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:500
>           RX bytes:119723 (116.9 KiB)  TX bytes:175334262 (167.2 MiB)
>
> vnet1     Link encap:Ethernet  HWaddr FE:16:36:0E:E7:F4
>           inet6 addr: fe80::fc16:36ff:fe0e:e7f4/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
>           RX packets:3494450 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:3243369 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:500
>           RX bytes:3466241191 (3.2 GiB)  TX bytes:822212316 (784.1 MiB)
>
> brctl show:
>
> bridge name     bridge id               STP enabled     interfaces
> br0             8000.002590639f7a       no              eth0
>                                                         vnet0
>                                                         vnet1
> br6             8000.000533487b29       no              eth6
> virbr0          8000.525400ce7a65       yes             virbr0-nic
>
> Should I have a virbr6?
>
> I'm obviously pretty lost and must admit I sorta hate bridging in KVM.
>
> - aurf

I'm not sure what you are asking. You should not see the vnetX devices from inside the VM (or even in the VM's definition file). They're created as needed to link the VM's interface to the bridge. Think of them as simple network cables.

Some of the formatting isn't showing well on my mail client (text only), so I am having a little trouble parsing some of the data...

Looking at br6, you can see it's already at 9000, so if the VMs were attached to it you would be able to use an MTU of 9000 from inside the VMs as well. The trick is, the vnetX devices are connected to the br0 bridge instead, which is stuck at 1500 because eth0 is still at 1500. So at this point, the VMs are traversing br0, not br6.

As for 'virbr0', that is libvirtd's default NAT'ed bridge, so no, you should not have a virbr6; the virbrX devices only exist for libvirt's NAT'ed networks, not for plain bridges like br6. I don't recommend using it; I usually destroy it, personally.

So to fix your problem, you need to tell the VMs to use br6. If you also want to use jumbo frames on br0, you need to increase the MTU of eth0. Remember that a bridge will use the MTU of the lowest connected device.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?
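
For reference, a minimal sketch of what "tell the VMs to use br6" looks like with virsh. The guest name 'myguest' is just a placeholder, and the virtio model is an assumption; adjust to match your actual domain:

    # Open the guest's definition in $EDITOR
    virsh edit myguest

    # In the <devices> section, point the interface at br6 instead of br0:
    #   <interface type='bridge'>
    #     <source bridge='br6'/>
    #     <model type='virtio'/>
    #   </interface>

    # The change takes effect after a full stop/start of the guest
    virsh shutdown myguest
    virsh start myguest

    # Inside the guest, raise the NIC to jumbo as well, e.g.:
    #   ip link set dev eth0 mtu 9000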
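
If you do decide to drop libvirt's default NAT'ed network (the one behind virbr0), a sketch of the usual virsh commands:

    # Stop the 'default' NAT network and keep it from coming back at boot
    virsh net-destroy default
    virsh net-autostart default --disable

    # Optionally remove its definition entirely
    virsh net-undefine default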
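
And if you later want jumbo frames on br0 as well, eth0 has to come up at 9000 so it stops dragging the bridge down to 1500. A sketch for CentOS-style ifcfg files, assuming the NIC and the switch port actually support jumbo frames; the exact file contents depend on how your bridge was defined:

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (persistent)
    DEVICE=eth0
    ONBOOT=yes
    BRIDGE=br0
    MTU=9000

    # You may need MTU=9000 in ifcfg-br0 as well, then restart networking:
    service network restart

    # Or change it on the fly (not persistent across reboots):
    ip link set dev eth0 mtu 9000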