Re: bridge + KVM performance

On 07/06/2009 12:34 PM, Martin Petermann wrote:
I'm currently looking at the network performance between two KVM guests
running on the same host. The host system is equipped with two quad-core
3 GHz Xeons and 32 GB of memory; 2 GB of memory is assigned to the guests,
enough that swap is not used. I'm using RHEL 5.3 (2.6.18-128.1.10.el5)
on all three systems:

  ____________________     ____________________
|                    |   |                    |
|     KVM guest      |   |      KVM guest     |
|    ic01vn08man     |   |     ic01vn09man    |
|____________________|   |____________________|
              \                  /
               \                /
                \              /
                 \            /
                  \          /
               ____\________/______
              |                    |
              |     KVM host       |
              |ethernet bridge: br3|
              |____________________|


On the host I've created a network bridge in the following way:

[root@ic01in01man ~]# cat /etc/sysconfig/network-scripts/ifcfg-br3
DEVICE=br3
TYPE=Bridge
ONBOOT=yes

and brought the bridge up with the commands:

brctl addbr br3
ifconfig br3 up
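
For reference, once the guests are running, libvirt attaches one tap device
per guest to this bridge; whether both guest interfaces really ended up on
br3 can be checked on the host, e.g.:

brctl show
brctl showmacs br3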

Within the configuration files of the KVM guests I added the following
sections:

ic01vn08man.xml:
...
     <interface type='bridge'>
       <source bridge='br3'/>
       <model type='virtio' />
       <mac address="00:ad:be:ef:99:08"/>
     </interface>
...

ic01vn09man.xml:
...
     <interface type='bridge'>
       <source bridge='br3'/>
       <model type='virtio' />
       <mac address="00:ad:be:ef:99:09"/>
     </interface>
...
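
For reference, after editing the XML the domains have to be redefined for
the change to take effect; one way to apply and then double-check the
interface definition (assuming the domain names match the file names) is:

virsh define ic01vn08man.xml
virsh dumpxml ic01vn08man | grep -A 3 '<interface'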

Within the guests I have configured the network in the following way:

[root@ic01vn08man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
# Virtio Network Device
DEVICE=eth3
BOOTPROTO=static
IPADDR=192.168.100.8
NETMASK=255.255.255.0
HWADDR=00:AD:BE:EF:99:08
ONBOOT=yes

[root@ic01vn09man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
# Virtio Network Device
DEVICE=eth3
BOOTPROTO=static
IPADDR=192.168.100.9
NETMASK=255.255.255.0
HWADDR=00:AD:BE:EF:99:09
ONBOOT=yes
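
Since the virtio model is what matters for performance here, it may be worth
confirming inside each guest that eth3 really is a virtio NIC, for example:

lspci | grep -i virtio
lsmod | grep virtio

which should show a "Virtio network device" and the virtio_net module.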

If I now test the network performance using the iperf tool
(http://sourceforge.net/projects/iperf/), I get the following results.

Performance between the two guests (the iperf server is running on the other
guest, ic01vn08man/192.168.100.8: ic01vn09man <-> ic01vn08man):

[root@ic01vn09man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m -w 131072
------------------------------------------------------------
Client connecting to 192.168.100.8, TCP port 5001
TCP window size:   256 KByte (WARNING: requested   128 KByte)
------------------------------------------------------------
[  4] local 192.168.100.9 port 34171 connected with 192.168.100.8 port
5001
[  3] local 192.168.100.9 port 34170 connected with 192.168.100.8 port
5001
[  4]  0.0-60.1 sec  2.54 GBytes    363 Mbits/sec
[  3]  0.0-60.1 sec  2.53 GBytes    361 Mbits/sec
[SUM]  0.0-60.1 sec  5.06 GBytes    724 Mbits/sec
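
(The matching server-side command is not shown above; it was presumably
something like

[root@ic01vn08man ~]# iperf -s -w 131072

on the receiving guest.)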

Results within the same guest (the iperf server is running on the same
system: ic01vn08man <-> ic01vn08man):

[root@ic01vn08man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m -w 131072

If you drop the -w 131072 you will get over 1 Gbit/s. Because of the bad
buffering configuration you get a lot of idle time (check your CPU
consumption). Using netperf is recommended instead. Have a look at one of
VMware's performance documents to see the huge difference that message size
and socket buffer size make.
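
For example, a netperf TCP stream run between the two guests would look
roughly like this (netserver on ic01vn08man, netperf on ic01vn09man; the
64 KB message size is only illustrative):

[root@ic01vn08man ~]# netserver
[root@ic01vn09man ~]# netperf -H 192.168.100.8 -t TCP_STREAM -l 60 -- -m 65536

With iperf, the equivalent change is simply to drop -w 131072 and let TCP
autotune the socket buffers.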


------------------------------------------------------------
Client connecting to 192.168.100.8, TCP port 5001
TCP window size:   256 KByte (WARNING: requested   128 KByte)
------------------------------------------------------------
[  4] local 192.168.100.8 port 55418 connected with 192.168.100.8 port
5001
[  3] local 192.168.100.8 port 55417 connected with 192.168.100.8 port
5001
[  3]  0.0-60.0 sec  46.2 GBytes  6.62 Gbits/sec
[  4]  0.0-60.0 sec  45.2 GBytes  6.47 Gbits/sec
[SUM]  0.0-60.0 sec  91.4 GBytes  13.1 Gbits/sec

724 Mbits/sec is far below what I had expected. The host system is
connected with 10G Ethernet, and I would need to see similar performance
between the guests.

Regards
   Martin


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
