Possible to reach more than 1 Gbit/s to a VM?

Hello,

I've been playing around with KVM for a few years.
Now I'm wondering: is it possible to combine bonding and bridging to get more than a single Gigabit link between a client and a VM? Everyone on the net says to use LACP; I already tried that and it worked, but throughput stayed at single-NIC speed.

This is my working setup on Debian Squeeze, 64-bit:

*cat /proc/net/bonding/bond0*
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:55

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 55:44:33:22:11:00

*cat /etc/network/interfaces*
auto lo
iface lo inet loopback

# The bonded network interface for LAN
auto bond0
iface bond0 inet manual
    bond-slaves none
    bond-mode   balance-rr
    bond-miimon 100
    #bond_lacp_rate fast
    #bond_ad_select 0
    arp_interval 80
    up /sbin/ifenslave bond0 eth1 eth2
    down /sbin/ifenslave bond0 -d eth1 eth2

#Onboard NIC #1 Nvidia Gigabit
auto eth1
iface eth1 inet manual
    bond-master bond0

#NIC #2 Intel PRO/1000 F Server Adapter - FIBER
auto eth2
iface eth2 inet manual
    bond-master bond0

# Bridge to LAN for virtual network KVM
auto br0
iface br0 inet static
    address 10.0.0.250
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.249
    dns-nameservers 10.0.0.249 8.8.8.8
    bridge-ports  bond0
    bridge-fd     9
    bridge-hello  2
    bridge-maxage 12
    bridge-stp    off

#NIC #3 - modem
auto eth0
iface eth0 inet manual

#Bridge LAN to virtual network KVM - modem
iface br1 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        metric 1
auto br1
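
For completeness, the guests hang off br0 via tap devices with virtio-net. A stripped-down example of how such a guest could be started (memory size, image path, MAC and tap name below are placeholders, assuming qemu-kvm's classic -net syntax; /etc/qemu-ifup just brings the tap up and runs "brctl addif br0 $1"):

kvm -m 1024 -drive file=/path/to/guest.img \
    -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
    -net tap,ifname=tap0,script=/etc/qemu-ifup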

*cat /etc/modprobe.d/bonding.conf*
alias bond0 bonding
options bonding mode=balance-rr miimon=100 downdelay=200 updelay=200 arp_interval=80
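
To double-check that the driver really runs in the intended mode and that both slaves carry traffic during a test, I look at sysfs and the per-interface counters (just my sanity checks):

cat /sys/class/net/bond0/bonding/mode           # should print "balance-rr 0"
watch -n1 'grep -E "eth1|eth2" /proc/net/dev'   # byte counters should grow on BOTH slaves under load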

I've already tried the following (single switch, not multiple):
- LACP in Debian + LACP on the switch
- static bond0 (round-robin) + static link aggregation on the switch, for both the client and the hypervisor
- several switches (HP V1910, 3Com 3824 and Planet GSD-802S)
- several NICs, including Intel PRO/1000 F and MF fiber adapters
- between two non-virtualised servers I can reach ~1.9 Gbit/s through the 3Com 3824 with NO link aggregation configured on the switch
- from client to VM I already get almost native speed (940 Mbit/s) using virtio-net and Debian
- tests with iperf, iSCSI and NFS; to rule out disk I/O limits I used ramdisks (see the iperf comparison below)
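
From what I've read, 802.3ad/LACP hashes per flow, so a single TCP stream can never exceed one slave, while balance-rr stripes per packet and can exceed one link for a single stream, at the cost of TCP reordering. So I've been comparing single-stream and multi-stream runs like this (<vm-ip> is a placeholder for the guest's address):

# on the VM
iperf -s
# on the client: one stream, then 4 parallel streams
iperf -c <vm-ip> -t 30
iperf -c <vm-ip> -t 30 -P 4
# for balance-rr, raising the reordering tolerance can help a single stream
sysctl -w net.ipv4.tcp_reordering=127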

Questions:
- is it even possible?
- do I have to create MORE bridge interfaces, one per NIC, and set up the aggregated link inside the VM instead? (see the sketch after this list)
- can a bridge interface itself limit bandwidth to 1 Gbit/s?
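
To make the second question concrete, here is the kind of layout I have in mind but have not tried yet (it would replace the current bond0/br0 on the host; interface names and the guest address are examples): one bridge per physical NIC on the host, each guest getting one virtio NIC on each bridge, and the bond assembled inside the guest.

# host /etc/network/interfaces - one bridge per physical NIC
auto br2
iface br2 inet manual
    bridge-ports  eth1
    bridge-stp    off
    bridge-fd     0

auto br3
iface br3 inet manual
    bridge-ports  eth2
    bridge-stp    off
    bridge-fd     0

# guest /etc/network/interfaces - bond the two virtio NICs
auto bond0
iface bond0 inet static
    address     10.0.0.251
    netmask     255.255.255.0
    bond-slaves none
    bond-mode   balance-rr
    bond-miimon 100
    up   /sbin/ifenslave bond0 eth0 eth1
    down /sbin/ifenslave bond0 -d eth0 eth1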

Regards,
Tom
