Re: virtio bonding bandwidth problem

On 22/01/10 23:33, Brian Jackson wrote:


>> 1. I am experiencing a 40% performance hit (600 Mb/s) on each individual
>> virtio guest connection ;
>>     
> I don't know what all features RHEL5.4 enables for kvm, but that doesn't seem 
> outside the realm of possibility. Especially depending on what OS is running 
> in the guest. I think RHEL5.4 has an older version of virtio, but I won't 
> swear to it. Fwiw, I get ~1.5Gbps guest to host on a Ubuntu 9.10 guest, 
> ~850mbit/s guest to host on a Windows 7 guest. To get those speeds, I have to 
> up the window sizes a good bit (the default is 8K, those numbers are at 1M). 
> At the default Windows 7 gets ~250mbit/s. 
>   

Thank you, Brian; your reply is much appreciated.



I took a shortcut and installed RHEL's ktune package (on both hosts and
guests), which tunes the following parameters:

net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 10000
net.ipv4.tcp_rmem = 8192 87380 8388608
net.ipv4.tcp_wmem = 8192 65536 8388608
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.udp_mem = 8388608 12582912 16777216
vm.swappiness = 30
vm.dirty_ratio = 50
vm.pagecache = 90
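
(For anyone who wants to try these values without pulling in the whole
ktune package, a minimal sketch: append them to /etc/sysctl.conf and
reload, or set individual parameters at runtime.)

# load everything from /etc/sysctl.conf:
sysctl -p
# or test a single parameter on the fly, e.g.:
sysctl -w net.core.rmem_max=8388608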


Thanks to these changes, external host-to-guest throughput is up from
~600 Mb/s to the expected ~950 Mb/s (guest to bare metal is still ~1.2 Gb/s).



>> 2. Total simultaneous bandwidth to all guests seems to be capped at 1
>> Gb/s ; quite problematic, as this renders my server consolidation almost
>> useless.
>>     
> I don't know about 802.3ad bonding, but I know the other linux bonding 
> techniques are very hard to benchmark due to the way the mac's are handled. I 
> would start by examining/describing your testing a little more. At the very 
> least what tools you're using to test, etc. would be helpful.
>   

- I am using iperf-2.0.4 for bandwidth testing (window-size variants are
sketched below):
"iperf -s" on the server side (TCP window size: 85.3 KB)
"iperf -c <server>" on the client side (TCP window size: 27.5 or 64.0 KB)

- Network:
* 2x interconnected Allied Telesis AT-x908 switches (60 Gb/s switching
backplane)
* hostA: 3x 1 Gb 802.3ad bond, with bridged virtio guests
virtA1, virtA2, virtA3 (bond config sketched below)
* hostB: 3x 1 Gb 802.3ad bond
* hostC: 1 Gb
* hostD: 1 Gb
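
(For reference, a minimal sketch of a RHEL-style bond definition along
the lines of hostA's; the interface names and the xmit_hash_policy line
are assumptions, not copied from the live machine:

# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=802.3ad miimon=100 xmit_hash_policy=layer2+3

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0

Note that xmit_hash_policy only influences the distribution of outgoing
traffic across the slaves; the incoming distribution is decided by the
switch's own hash.)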

- Tests (simultaneous iperf connections from clients -> servers, in Mb/s):
1. B,C,D -> A : 990,600,700 Mb/s = 2.3 Gb/s, which confirms a
successful 802.3ad setup for hostA
2. B,B,B -> A : 450,300,250 Mb/s = 1 Gb/s; due to MAC handling,
maximum bandwidth is perhaps limited to 1 Gb/s per host-to-host connection?
3. B -> A1 : 980 Mb/s
4. C -> A2 : 650 Mb/s
5. B,B -> A1,A2 : 340,650 Mb/s (limited host-to-host?)
6. B,C -> A1,A2 : 900,100 Mb/s
7. B,C,D -> A1,A2,A3 : 750,150,100 Mb/s
8. A1,A2 -> B,B : 500,500 Mb/s (limited host-to-host?)
9. A1,A2 -> B,C : 980,730 Mb/s

- Results [2], [5] and [8] seem to imply a 1 Gb/s bandwidth limit for a
single physical client-server connection;
- Result [9] indicates > 1 Gb/s aggregate bandwidth for outgoing
connections from the virtio guests (to different servers);

- Results [6] and [7] illustrate the problem: incoming bandwidth to all
virtio guests combined is capped at 1 Gb/s.
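
A sketch for testing the MAC-hashing hypothesis: watch the per-slave RX
counters on hostA while rerunning tests [6]/[7]; if all incoming flows
land on the same slave, the 1 Gb/s cap is explained (slave names
eth0-eth2 are an assumption):

# overall bond/slave state:
cat /proc/net/bonding/bond0
# per-slave RX byte counters, sampled during a test:
for i in eth0 eth1 eth2 ; do
    echo -n "$i: " ; cat /sys/class/net/$i/statistics/rx_bytes
done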



Best regards,
Didier
-- 

Didier Moens , IT services
Department for Molecular Biomedical Research (DMBR)
VIB - Ghent University


