Re: bonding vs 802.3ad/Cisco EtherChannel link aggregation

Chris Friesen wrote:
> Cacophonix wrote:
> 
>>--- Chris Friesen <cfriesen@nortelnetworks.com> wrote:
> 
> 
>>>This has always confused me.  Why doesn't the bonding driver try and spread
>>>all the traffic over all the links?
>>
>>Because then you risk heavy packet reordering within an individual flow,
>>which can be detrimental in some cases.
>>--karthik
> 
> 
> I can see how it could make the receiving host work harder on reordering, but if throughput is key,
> wouldn't you still come out ahead if you can push twice as many packets through the pipe?
> 
> Chris
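
The per-flow pinning Karthik describes comes from the transmit hash: with a
layer2-style xmit_hash_policy the slave is chosen from the MAC pair, so every
packet of a given flow leaves on the same link.  A rough sketch of that idea
(hypothetical helper, assuming the usual XOR-the-low-MAC-bytes-modulo-slave-count
scheme, not the driver's actual code) would be:

/*
 * Sketch only: a layer2-style transmit hash.  Hashing on the MAC
 * pair means a single flow always maps to the same slave, so it
 * cannot be reordered across links; per-packet round-robin would
 * rotate the index instead and give up that guarantee.
 */
#include <linux/if_ether.h>	/* struct ethhdr, ETH_ALEN */

static int l2_hash_slave(const struct ethhdr *eth, int slave_count)
{
	return (eth->h_source[ETH_ALEN - 1] ^ eth->h_dest[ETH_ALEN - 1])
		% slave_count;
}

balance-rr rotates the slave per packet instead, which is where the cross-link
reordering risk comes from; which behaviour you get is selected by the bonding
module's "mode" option (e.g. balance-rr vs 802.3ad).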

Also, I notice lots of out-of-order packets on a single gigE link when running at high
speeds (SMP machine), so the kernel is still having to reorder quite a few packets.
Has anyone done any tests to see how much worse it is with dual-port bonding?

NAPI helps my problem, but does not make it go away entirely.

Ben



-- 
Ben Greear <greearb@candelatech.com>       <Ben_Greear AT excite.com>
President of Candela Technologies Inc      http://www.candelatech.com
ScryMUD:  http://scry.wanfear.com     http://scry.wanfear.com/~greear

