virtio bonding bandwidth problem

(initially posted to libvirt-users@xxxxxxxxxx; cross-posted to this list
at the request of Daniel P. Berrange)


Dear all,


I have been wrestling with this issue for the past few days; googling
around doesn't seem to yield anything useful, hence this cry for help.



Setup (RHEL5.4):

* kernel-2.6.18-164.10.1.el5
* kvm-83-105.el5
* libvirt-0.6.3-20.el5
* net.bridge.bridge-nf-call-{arp,ip,ip6}tables = 0 (expanded below)
* tested with/without jumbo frames
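
(The bridge-netfilter sysctls from the list above, written out in full;
whether they are set in /etc/sysctl.conf or at runtime with sysctl -w
shouldn't matter:)

net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0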


- I am running several RHEL5.4 KVM virtio guest instances on a Dell PE
R805 RHEL5.4 host. Host and guests are fully updated; I am using iperf
to test the available bandwidth from three different client machines in
the network to both the host and the guests.

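Each guest NIC is defined more or less like this in the libvirt domain
XML (MAC address omitted, and 'br0' below is just a placeholder for the
actual bridge name):

<interface type='bridge'>
  <source bridge='br0'/>    <!-- the bridge sitting on top of bond0 -->
  <model type='virtio'/>
</interface>
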
- To increase both bandwidth and provide fail-over, three 1 Gb/s network
interfaces (BCM5708, bnx2 driver) on the host are bonded (802.3ad) into
a 3 Gb/s bond0, which is bridged. As all guest interfaces are attached
to that bridge, I would expect the total bandwidth available to the
guests to be in the range of 2-2.5 Gb/s.

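In essence, the bonding/bridging configuration boils down to the
following (paraphrased and trimmed; option values and the IP line are
illustrative rather than copied verbatim):

# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-eth0  (eth1 and eth2 analogous)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BRIDGE=br0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=...          # the host's IP address lives on the bridge
ONBOOT=yes
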
- Testing with one external client connection to the bare-metal host
yields approx. 940 Mb/s;

- Testing with three simultaneous client connections to the host yields
about 2.5 Gb/s in aggregate, which confirms that the bonding setup
itself works.
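
The iperf runs themselves are nothing fancy; essentially (with <target>
being either the host or a guest IP):

# on the machine under test (host or guest):
iperf -s

# on each client:
iperf -c <target> -t 60

# or, for several parallel streams from a single client:
iperf -c <target> -t 60 -P 3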


Problem:

Unfortunately, available bandwidth to the guests proves to be problematic:

1. One client to one guest: 250-600 Mb/s;
2a. One client to three guests: 300-350 Mb/s to each guest, total not
exceeding 980 Mb/s;
2b. Three clients to three guests: 300-350 Mb/s to each guest;
2c. Three clients to the host and two guests: 940 Mb/s (host) + 500 Mb/s
to each guest.


Conclusions:

1. I am experiencing a performance hit of roughly 40% on each individual
virtio guest connection (at best ~600 Mb/s, often less, versus ~940 Mb/s
to the bare-metal host);
2. Total simultaneous bandwidth to all guests appears to be capped at 1
Gb/s, which is quite problematic, as it renders my server consolidation
almost useless.


I could bridge each host network interface separately and assign guest
interfaces to those bridges by hand, but that would defeat the whole
point of the load balancing and fail-over provided by the bonding on
the host.
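
(That fallback would look roughly like the sketch below, with each guest
then pinned to one of the per-NIC bridges by hand; bridge and interface
names are illustrative:)

# one bridge per physical NIC, instead of bridging bond0:
brctl addbr br0 ; brctl addif br0 eth0
brctl addbr br1 ; brctl addif br1 eth1
brctl addbr br2 ; brctl addif br2 eth2
# ...and then point each guest's interface at br0/br1/br2 in its domain XML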



Any ideas, anyone? Or am I looking in the wrong direction (clueless
setup, flawed testing methodology, ...)?



I monitor the list; a CC: would be appreciated.

Thanks in advance for any help,
Didier
-- 
Didier Moens , IT services
Department for Molecular Biomedical Research (DMBR)
VIB - Ghent University

