A single connection can run at double speed; I checked this using iperf and nuttcp.
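For example, a single-stream test between the two hosts can be run like this (the address below is just a placeholder for my setup, assuming iperf 2.x):

# on the receiving side
iperf -s
# on the sending side: one TCP stream for 30 seconds
iperf -c 10.0.0.250 -t 30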
So bonding interfaces in the VM without bonding interfaces in the hypervisor
will not work either?
The round-robin policy provides load balancing and failover; all NICs work
together, as I can see from the statistics:
LAB SERVER
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1f:1f:fa:3f:a9
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:24:1d:66:b7:9a
ifconfig bond0
bond0 Link encap:Ethernet HWaddr 00:1f:1f:fa:3f:a9
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:62560993 errors:0 dropped:0 overruns:0 frame:0
TX packets:34620931 errors:0 dropped:92 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:64688731563 (64.6 GB) TX bytes:15820286443 (15.8 GB)
ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:1f:1f:fa:3f:a9
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:48828189 errors:0 dropped:0 overruns:0 frame:0
TX packets:17310186 errors:0 dropped:92 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:49519993668 (49.5 GB) TX bytes:7910144215 (7.9 GB)
Interrupt:44 Base address:0x4000
ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:1f:1f:fa:3f:a9
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:13733216 errors:0 dropped:0 overruns:0 frame:0
TX packets:17310956 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15169147719 (15.1 GB) TX bytes:7910155539 (7.9 GB)
Interrupt:43 Base address:0xa000
HOME SERVER
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:25:22:8a:7a:ef
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:03:47:b1:e3:41
/sbin/ifconfig bond0
bond0 Link encap:Ethernet HWaddr 00:25:22:8a:7a:ef
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:5401406 errors:0 dropped:0 overruns:0 frame:0
TX packets:8713650 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:404938085 (386.1 MiB) TX bytes:12497904912 (11.6 GiB)
/sbin/ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:25:22:8a:7a:ef
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:347896 errors:0 dropped:0 overruns:0 frame:0
TX packets:4356784 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:21551680 (20.5 MiB) TX bytes:6257879091 (5.8 GiB)
Interrupt:27 Base address:0x8000
/sbin/ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:25:22:8a:7a:ef
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:5053513 errors:0 dropped:0 overruns:0 frame:0
TX packets:4356866 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:383386585 (365.6 MiB) TX bytes:6240025821 (5.8 GiB)
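A simple way to watch how the traffic spreads over the slaves while a test runs (just the standard counters, nothing bonding-specific):

# refresh the per-interface byte/packet counters every second
watch -n 1 'cat /proc/net/dev'
# or look at a single slave with iproute2
ip -s link show eth1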
There is also a package called "balance":
/Description: Load balancing solution and generic tcp proxy
Balance is a load balancing solution being a simple but powerful generic tcp
proxy with round robin load balancing and failover mechanisms. Its behaviour
can be controlled at runtime using a simple command line syntax./
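Basic usage is something like this (the port and backend addresses below are only an example):

# accept connections on local port 3128 and distribute them
# round-robin between two example backends
balance 3128 10.0.0.11:3128 10.0.0.12:3128

Note that balance works on whole TCP connections, not packets, so a single connection still follows one path.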
Regards,
Tom
On 20.07.2011 17:25, Freddie Cash wrote:
No matter which bonding method you use, traffic between 1 client and
the VM will go across one interface, thus limiting the traffic to 1 Gbps.
All bonding does is allow you to have multiple 1 Gbps connections
between multiple clients and the VM. Each connection is limited to 1
Gbps, but you can have multiple connections going at once (each
connection goes across a separate interface, managed by the bonding
protocol).
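A quick way to check this for a given setup is to compare a single iperf stream against several parallel ones (the address is a placeholder):

# one TCP stream
iperf -c 10.0.0.250 -t 30
# four parallel TCP streams
iperf -c 10.0.0.250 -t 30 -P 4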
If you need more than 1 Gbps of throughput for a single connection,
then you need a 10 Gbps (or faster) link. AFAIK, there's no support
for 10 Gbps interfaces in KVM.
On Wed, Jul 20, 2011 at 8:00 AM, TooMeeK <toomeek_85@xxxxx> wrote:
Hello,
I've been playing around with KVM for a few years.
But now I'm wondering: is it possible to mix bonding and bridging
together to reach more than a single Gigabit link between a client and a VM?
Looking around the net, everyone says to use LACP... I did that already
and it worked, but still only at single-NIC speed.
This is my working setup on Debian Squeeze 64-bit:
*cat /proc/net/bonding/bond0*
/Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:55
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 55:44:33:22:11:00/
*cat /etc/network/interfaces*
/auto lo
iface lo inet loopback

# The bonded network interface for LAN
auto bond0
iface bond0 inet manual
    bond-slaves none
    bond-mode balance-rr
    bond-miimon 100
    #bond_lacp_rate fast
    #bond_ad_select 0
    arp_interval 80
    up /sbin/ifenslave bond0 eth1 eth2
    down /sbin/ifenslave bond0 -d eth1 eth2

#Onboard NIC #1 Nvidia Gigabit
auto eth1
iface eth1 inet manual
    bond-master bond0

#NIC #2 Intel PRO/1000 F Server Adapter - FIBER
auto eth2
iface eth2 inet manual
    bond-master bond0

# Bridge to LAN for virtual network KVM
auto br0
iface br0 inet static
    address 10.0.0.250
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.249
    dns-nameservers 10.0.0.249 8.8.8.8
    bridge-ports bond0
    bridge-fd 9
    bridge-hello 2
    bridge-maxage 12
    bridge-stp off

#NIC #3 - modem
auto eth0
iface eth0 inet manual

#Bridge LAN to virtual network KVM - modem
iface br1 inet manual
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    metric 1
auto br1/
*cat /etc/modprobe.d/bonding.conf*
/alias bond0 bonding
options bonding mode=balance-rr miimon=100 downdelay=200 updelay=200 arp_interval=80/
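After the module is loaded, the parameters the driver actually uses can be double-checked through sysfs (just a sanity check):

cat /sys/class/net/bond0/bonding/mode
cat /sys/class/net/bond0/bonding/miimon
cat /sys/class/net/bond0/bonding/slaves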
I've already tried the following (single switch, not multiple):
- LACP in Debian + LACP on the switch
- static bond0 (round-robin) + static link aggregation on the
switch for both the client and the hypervisor
- tried several switches (HP V1910, 3Com 3824 and Planet GSD-802S)
- tried several NICs, including Intel PRO/1000 F and MF fiber adapters
- for example, I can reach ~1.9 Gbit/s between two non-virtualised
servers using the 3Com 3824 with NO link aggregation configured on the
switch
- I already reached almost native speed (940 Mbit/s) from client to VM
using virtio-net and Debian
- tests using iperf, iSCSI and NFS; ramdisks were used to avoid I/O limits
Questions:
- is it even possible?
- do I maybe have to create MORE bridge interfaces, one per NIC, and
set up an aggregated link inside the VM then? (see the sketch after
this list)
- can the bridge interface limit bandwidth to 1 Gbit/s?
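If the extra-bridges idea is worth testing, a rough and untested sketch of the hypervisor side could look like this (br2/br3 are hypothetical names, and eth1/eth2 would have to be taken out of bond0 on the host first); the guest would then get one virtio NIC on each bridge and run bond0 with balance-rr itself, configured the same way as above:

# hypothetical bridges, one physical NIC each, no bonding on the host
auto br2
iface br2 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0

auto br3
iface br3 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0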
Regards,
Tom
--
Freddie Cash
fjwcash@xxxxxxxxx