Mark Haney wrote:
Steve Repo wrote:
On Tue, Sep 23, 2008 at 6:38 PM, Mark Haney <mhaney@xxxxxxxxxxxxxxxx> wrote:
Frank Murphy wrote:
Is this included in Fedora 8?
http://www.linuxfoundation.org/en/Net:Bonding
Tried yum info */ifenslave
Frank
Yeah, it's included; I use it all the time between GigE ports on my data
servers.
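For what it's worth, there's no separate package to hunt down with yum: the
bonding driver ships as part of the stock Fedora kernel. A quick way to check
that it's there is just the standard modinfo, e.g.:

  /sbin/modinfo bonding | head -n 3

If that prints the module's filename and description, the driver is present
and only needs to be configured.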
Very cool! I'm planning to do something like this but have a question.
Does bonding really use both interfaces for data transfer, or does it use
the second interface only if the first one is maxed out?
Thanks,
Steve
AFAIK, it's pretty well load balanced, in that it looks and acts like one
interface. I believe the algorithm determines which interface is least used
and sends traffic out that one.
It depends entirely on the bonding mode. To summarize:
mode 0 (balance-rr): naive round-robin; hammers the CPU, but it's the only
mode that can exceed wire speed for a single socket. Doesn't scale well past
two NICs.
mode 1 (active-backup): only one NIC active at a time. If high availability
is your priority, use this; it's the least likely to cause problems, since it
looks to the network like just one NIC, because it is.
mode 2 (balance-xor): non-LACP trunking, for people with certain older
managed switches.
mode 3 (broadcast): mirrors all traffic on all interfaces; generally only
used to test networking gear.
mode 4 (802.3ad): LACP trunking; load-balances across active NICs with help
from a managed switch.
mode 5 (balance-tlb): transmit load balancing; good for servers that receive
small requests and then send large responses.
mode 6 (balance-alb): adaptive load balancing; uses ARP trickery to make
peers talk to different NICs, so it only helps if most of your traffic stays
within the subnet.
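
If you want to try it, the mode is chosen when the bonding driver is loaded.
Here's a rough sketch of the Fedora-style setup; the mode, interface names,
and addresses are just example values, so adjust them for your own boxes:

  # /etc/modprobe.conf: load the driver; mode=1 is active-backup,
  # miimon=100 checks link state every 100 ms
  alias bond0 bonding
  options bonding mode=1 miimon=100

  # /etc/sysconfig/network-scripts/ifcfg-bond0: the bonded interface
  DEVICE=bond0
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-eth0: repeat for each slave NIC
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none

After a 'service network restart', 'cat /proc/net/bonding/bond0' shows which
mode is active and the status of each slave.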
-- Chris