Jay Vosburgh wrote:
>	Another similar Rube Goldberg sort of scheme I've set up in the
> past (in the lab, for bonding testing, not in a production environment,
> your mileage may vary, etc, etc) is to dedicate particular switch ports
> to particular vlans.  So, e.g.,
>
> linux box eth0 ---- port 1:vlan 99 SWITCH(ES) port 2:vlan 99 ---- eth0 linux box
>	bond0 eth1 ---- port 3:vlan 88 SWITCH(ES) port 4:vlan 88 ---- eth1 bond0
>
>	This sort of arrangement requires setting the Cisco switch ports
> to be native to a particular vlan, e.g., "switchport mode access",
> "switchport access vlan 88".  Theoretically, the intervening switches
> will simply pass the vlan traffic through and not decapsulate it until
> it reaches its end destination port.  You might also have to fool with
> the inter-switch links to make sure they're trunking properly (to pass
> the vlan traffic).

I have now been able to test the above setup. Both eth2 interfaces are
in vlan 801; each eth3 interface is in vlan 801. Bonding is configured
in round-robin mode, with net.ipv4.tcp_reordering = 127.

As a basic test I disabled bonding and ran a parallel benchmark over
both vlans (192.168.1.0/24 + 10.10.0.0/24).

# ./linux-i386 -t 10.10.0.1
NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel

TCP connection established.
Packet size  1k bytes:  73444 KByte/s Tx,  72705 KByte/s Rx.
Packet size  2k bytes:  73733 KByte/s Tx,  71534 KByte/s Rx.
Packet size  4k bytes:  73418 KByte/s Tx,  72074 KByte/s Rx.
Packet size  8k bytes:  73458 KByte/s Tx,  71962 KByte/s Rx.
Packet size 16k bytes:  73113 KByte/s Tx,  72132 KByte/s Rx.
Packet size 32k bytes:  72719 KByte/s Tx,  73442 KByte/s Rx.
Done.

# ./linux-i386 -t 192.168.1.1
NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel

TCP connection established.
Packet size  1k bytes:  74130 KByte/s Tx,  71282 KByte/s Rx.
Packet size  2k bytes:  73188 KByte/s Tx,  71663 KByte/s Rx.
Packet size  4k bytes:  73321 KByte/s Tx,  72349 KByte/s Rx.
Packet size  8k bytes:  73080 KByte/s Tx,  72272 KByte/s Rx.
Packet size 16k bytes:  73032 KByte/s Tx,  72307 KByte/s Rx.
Packet size 32k bytes:  72995 KByte/s Tx,  72132 KByte/s Rx.
Done.

This is not 2 x GbE, but it is more than just one interface.

Next I enabled bonding and repeated the test over the bond0 interfaces.

# ./linux-i386 -t 10.60.1.244
NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel

TCP connection established.
Packet size  1k bytes:  113469 KByte/s Tx,  113990 KByte/s Rx.
Packet size  2k bytes:  112990 KByte/s Tx,  114107 KByte/s Rx.
Packet size  4k bytes:  110997 KByte/s Tx,  114269 KByte/s Rx.
Packet size  8k bytes:  113337 KByte/s Tx,  114338 KByte/s Rx.
Packet size 16k bytes:  113587 KByte/s Tx,  113920 KByte/s Rx.
Packet size 32k bytes:  113249 KByte/s Tx,  114354 KByte/s Rx.
Done.

Now I get only the speed of one GbE interface again.

ifstat on server a (netio server):

        eth2                   eth3
 KB/s in  KB/s out     KB/s in  KB/s out
120257.6   6419.24    120143.6   6416.79
    0.00      0.00        0.00      0.00
58908.95  67127.21    56951.31  69093.78
    0.00      0.00        0.00      0.00
 6277.72  119635.0     6277.95  119910.9
    0.00      0.00        0.00      0.00
 6306.51  120092.4     6309.26  119892.6
    0.00      0.00        0.00      0.00
 2945.82  55833.14     2832.18  54014.88
    0.00      0.00        0.00      0.00

ifstat on server b (netio "client"):

        eth2                   eth3
 KB/s in  KB/s out     KB/s in  KB/s out
 6339.45  119813.5     6361.06  119714.8
    0.00      0.00        0.00      0.00
 8852.77  117313.6    14954.50  111191.7
    0.00      0.00        0.00      0.00
119485.3   6268.16    119901.3   6270.50
    0.00      0.00        0.00      0.00
120151.5   6305.75    119914.7   6309.08
    0.00      0.00        0.00      0.00
117493.9   6179.55    111202.9   5838.42
    0.00      0.00        0.00      0.00

It seems that the traffic is shared equally over both interfaces. Only
two switches carrying the vlans are involved (two buildings).

Any ideas? Is this the performance I should expect from (nearly) 2x GbE,
with packet reordering in mind?

Ralf
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
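For completeness, a rough sketch of the host-side setup described above
(the bonding mode and tcp_reordering value are as stated in the post;
the miimon value and the exact command sequence are my assumptions, not
a transcript of what was actually run):

```shell
# Load the bonding driver in round-robin (balance-rr) mode.
# miimon=100 is an assumed link-monitoring interval.
modprobe bonding mode=balance-rr miimon=100

# Enslave the two GbE interfaces and bring the bond up.
ip link set bond0 up
ifenslave bond0 eth2 eth3
ip addr add 10.60.1.244/24 dev bond0    # address taken from the netio test

# Let TCP tolerate heavy reordering before it treats gaps as loss.
sysctl -w net.ipv4.tcp_reordering=127
```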