On Mon, 2009-07-06 at 14:53 +0300, Dor Laor wrote:
> On 07/06/2009 12:34 PM, Martin Petermann wrote:
> > I'm currently looking at the network performance between two KVM guests
> > running on the same host. The host system is equipped with two quad-core
> > 3 GHz Xeons and 32G of memory. 2G of memory is assigned to each guest,
> > enough that swap is not used. I'm using RHEL 5.3 (2.6.18-128.1.10.el5)
> > on all three systems:
> >
> >   ____________________          ____________________
> >  |                    |        |                    |
> >  |     KVM guest      |        |     KVM guest      |
> >  |    ic01vn08man     |        |    ic01vn09man     |
> >  |____________________|        |____________________|
> >            \                           /
> >             \                         /
> >              \                       /
> >               \                     /
> >                \                   /
> >            _____\_________________/_____
> >           |                             |
> >           |          KVM host           |
> >           |    ethernet bridge: br3     |
> >           |_____________________________|
> >
> > On the host I've created a network bridge in the following way
> >
> > [root@ic01in01man ~]# cat /etc/sysconfig/network-scripts/ifcfg-br3
> > DEVICE=br3
> > TYPE=Bridge
> > ONBOOT=yes
> >
> > and installed the bridge with the commands
> >
> > brctl addbr br3
> > ifconfig br3 up
> >
> > Within the configuration files of the KVM guests I added the following
> > sections:
> >
> > ic01vn08man.xml:
> > ...
> > <interface type='bridge'>
> >   <source bridge='br3'/>
> >   <model type='virtio' />
> >   <mac address="00:ad:be:ef:99:08"/>
> > </interface>
> > ...
> >
> > ic01vn09man.xml:
> > ...
> > <interface type='bridge'>
> >   <source bridge='br3'/>
> >   <model type='virtio' />
> >   <mac address="00:ad:be:ef:99:09"/>
> > </interface>
> > ...
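[As an aside: for a host-internal bridge like br3, disabling spanning tree and zeroing the forward delay avoids the default multi-second delay before a newly attached tap device passes traffic. A sketch extending the manual setup above — these extra brctl calls are optional and not part of the original configuration:

```shell
# Bridge setup with optional latency tuning (a sketch, not the original setup)
brctl addbr br3
brctl stp br3 off    # no STP needed when the bridge has no external uplink
brctl setfd br3 0    # forward delay 0: ports forward immediately when added
ifconfig br3 up
```

Without `setfd 0`, a guest's tap port sits in the listening/learning states for the forward-delay period after attach.]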
> >
> > Within the guests I have configured the network in the following way:
> >
> > [root@ic01vn08man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
> > # Virtio Network Device
> > DEVICE=eth3
> > BOOTPROTO=static
> > IPADDR=192.168.100.8
> > NETMASK=255.255.255.0
> > HWADDR=00:AD:BE:EF:99:08
> > ONBOOT=yes
> >
> > [root@ic01vn09man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
> > # Virtio Network Device
> > DEVICE=eth3
> > BOOTPROTO=static
> > IPADDR=192.168.100.9
> > NETMASK=255.255.255.0
> > HWADDR=00:AD:BE:EF:99:09
> > ONBOOT=yes
> >
> > I now test the network performance using the iperf tool
> > (http://sourceforge.net/projects/iperf/).
> >
> > Performance between the two guests (iperf server is running on the other
> > guest, ic01vn08man/192.168.100.8: ic01vn09man <-> ic01vn08man):
> >
> > [root@ic01vn09man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m -w 131072
> > ------------------------------------------------------------
> > Client connecting to 192.168.100.8, TCP port 5001
> > TCP window size: 256 KByte (WARNING: requested 128 KByte)
> > ------------------------------------------------------------
> > [  4] local 192.168.100.9 port 34171 connected with 192.168.100.8 port 5001
> > [  3] local 192.168.100.9 port 34170 connected with 192.168.100.8 port 5001
> > [  4]  0.0-60.1 sec  2.54 GBytes   363 Mbits/sec
> > [  3]  0.0-60.1 sec  2.53 GBytes   361 Mbits/sec
> > [SUM]  0.0-60.1 sec  5.06 GBytes   724 Mbits/sec
> >
> > Results within the same guest (iperf server is running on the same
> > system: ic01vn08man <-> ic01vn08man):
> >
> > [root@ic01vn08man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m -w 131072
>
> If you drop the -w 131072 you'll get over 1G performance. Because of
> bad buffering config you get lots of idle time (check your CPU
> consumption).
> Using netperf is more recommended. You can check one of VMware's
> performance documents to see the huge difference that message size and
> socket size make.
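[The buffering point can be made concrete: TCP can have at most one window of data in flight per round trip, so throughput is capped at window/RTT. With the 256 KByte window iperf actually used, a small sketch shows the cap — the 1 ms RTT here is purely illustrative (measure the real inter-guest RTT with ping), so the resulting number is only an example:

```shell
# TCP throughput cap = window / RTT (an illustration, not a measurement)
awk 'BEGIN {
  window = 256 * 1024   # bytes: the window iperf reported above
  rtt    = 0.001        # seconds: ASSUMED inter-guest RTT, measure your own
  printf "cap: %.2f Gbit/s\n", window * 8 / rtt / 1e9
}'
```

If the guest-to-guest RTT is a few milliseconds — plausible for virtio in 2.6.18-era KVM — a fixed 256 KByte window lands you in exactly the sub-gigabit range seen above, which is why dropping -w and letting the stack autotune helps.]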
Thanks for your answer. If I remove the "-w" specification I can see a
throughput of about 1.2 Gbits/sec. Using netperf I can see a similar
throughput:

[root@ic01vn09man netperf-2.4.5]# netperf -f g -H 192.168.100.8
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.100.8 (192.168.100.8) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^9bits/sec

 87380  16384  16384    10.01       1.40

Also changing the message size and socket size options does not help
here. What is limiting the network performance of the guests to just
over 1 Gbit/sec? The performance data from the performance document you
mentioned (10Gbps Networking Performance) shows much better values even
with MTU=1500 and appropriate values for socket size and message size.

> > ------------------------------------------------------------
> > Client connecting to 192.168.100.8, TCP port 5001
> > TCP window size: 256 KByte (WARNING: requested 128 KByte)
> > ------------------------------------------------------------
> > [  4] local 192.168.100.8 port 55418 connected with 192.168.100.8 port 5001
> > [  3] local 192.168.100.8 port 55417 connected with 192.168.100.8 port 5001
> > [  3]  0.0-60.0 sec  46.2 GBytes  6.62 Gbits/sec
> > [  4]  0.0-60.0 sec  45.2 GBytes  6.47 Gbits/sec
> > [SUM]  0.0-60.0 sec  91.4 GBytes  13.1 Gbits/sec
> >
> > 724 Mbits/sec is far from what I had expected. The host system is
> > connected with 10G ethernet and it would be necessary to have a
> > similar performance.
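[For the record, a socket/message size sweep with netperf's TCP_STREAM test-specific options might look like the sketch below. It is written as a dry run (each invocation is echoed, not executed); remove the leading `echo` to run it against a netserver on 192.168.100.8 as in this thread. The -s/-S flags set the local/remote socket buffer sizes and -m the send message size; the specific sizes are illustrative:

```shell
# Dry-run sketch of a netperf socket/message size sweep (sizes are examples)
for sock in 65536 131072 262144; do
  for msg in 16384 65536 131072; do
    echo netperf -f g -H 192.168.100.8 -- -s $sock -S $sock -m $msg
  done
done
```

Comparing the resulting 3x3 grid of throughputs against CPU utilization on the host would show whether the bottleneck is buffering or per-packet exit overhead.]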
> > Regards
> > Martin
> >
> > --
> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > the body of a message to majordomo@xxxxxxxxxxxxxxx
> > More majordomo info at http://vger.kernel.org/majordomo-info.html