Duplicate incoming packets with Bonding Driver v2.6.1 ...

Hello:
 
I'm wondering if the community can shed light on a problem
we are seeing when using the Ethernet Channel Bonding
Driver. To help you help me, a summary of our findings and
configuration follows.
 
=========================
The problem:
=========================
In captures taken with tcpdump(1) against the bond0 interface
and subsequently analyzed with Ethereal, we are seeing
duplicate packets **constantly** -- and NOT just
shortly after periods of network inactivity, which could be
explained as a transient effect of switches re-learning
their MAC address tables.
 
Duplicate packets are seen only when both eth0 and eth1
are up. When only one of the two interfaces is up (but
not both), we do not see duplicate packets. Again, with
both interfaces up the duplicates appear all the time.
 

============================================================
root# tcpdump -i [bond0|eth1] -c 1000 -w /tmp/out.tcpdump
============================================================ 
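(Illustrative only: a rough way to count the duplicates offline from the
capture above. The -t flag suppresses timestamps so that truly identical
frames collapse under uniq; the exact invocation may need adjusting.)

root# tcpdump -nn -t -r /tmp/out.tcpdump | sort | uniq -c | sort -rn | head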
 
=========================
The O/S Configuration:
=========================
Kernel version..........: Gentoo sources 2.6.10-r6
Bond Driver Version.....: Ethernet Channel Bonding Driver, v2.6.1
Bond mode...............: active-backup
Enslaved Interfaces.....: eth0 & eth1 (Intel e1000 NICs)
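(For reference, and purely illustrative: the driver version above can be
confirmed once the module is loaded, and modinfo should report it as well.)

root# head -1 /proc/net/bonding/bond0
root# modinfo bonding | grep -i '^version'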
 
=========================
The Topology:
=========================
For high availability, eth0 and eth1 are physically connected
to different switches made by Foundry (model: Workgroup 448)
(shown below by the letter 'F' enclosed in a box).
Spanning tree is in use, with the root bridge forced to be the
Cisco switch.
 
   --------------------------------
  |                                |
 ---       ---       ---          ---
| F |-----| F |-----| F |--------| F |
 ---       ---       ---          ---
  |                  | |           |
  |    ---------     | |   ------  |
   ---|  CISCO  |----   --| HOST |-
       ---------           ------
 

=====================
The traffic:
=====================
Network traffic consists of UDP broadcast packets.
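(Illustrative: to confirm that the duplicates involve this broadcast
traffic, a capture restricted to UDP broadcasts -- filter syntax per
pcap-filter -- would look roughly like this.)

root# tcpdump -i bond0 -nn -c 1000 'udp and ether broadcast'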
 

=========================
More Information:
=========================
Interestingly enough, with eth0 down (i.e. "ifconfig eth0 down")
and eth1 up, a tcpdump on bond0 and a tcpdump on eth1 produce
different data sets. On eth1 we see only data leaving the server,
whereas on bond0 we see all data (i.e. data coming in, data going
out, and data simply passing by on the wire). Again, eth0 is down
during this time (see below). This might be explained if the driver
implementing the bond0 interface hooks in at a lower level than the
driver implementing the eth1 interface (and can therefore intercept
the traffic and do with it what it needs to), but I'll let the
community clarify.
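(For the record, the side-by-side captures described above can be taken
along these lines -- reconstructed here purely for illustration. Adding -p,
which keeps tcpdump from putting the interface into promiscuous mode, may
also help separate a promiscuous-mode effect from a driver-layering one.)

root# tcpdump -nn -i bond0 -c 1000 -w /tmp/bond0.pcap
root# tcpdump -nn -i eth1  -c 1000 -w /tmp/eth1.pcap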
 
bond0     Link encap:Ethernet  HWaddr 00:11:2F:37:C9:41  
          inet addr:10.130.101.5  Bcast:10.130.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
 
eth0      Link encap:Ethernet  HWaddr 00:11:2F:37:C9:41  
          BROADCAST NOARP SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Base address:0xb000 Memory:f1020000-f1040000 
 
eth1      Link encap:Ethernet  HWaddr 00:11:2F:37:C9:41  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Base address:0xc000 Memory:f4000000-f4020000
 
======================================
Other configuration information:
======================================
---------------------------------------------------
The following code brings up the bond0 interface.
---------------------------------------------------
/sbin/insmod bonding miimon=100 mode=active-backup
/sbin/ifconfig bond0 $IPADDR netmask $MASK broadcast $BCAST 
/sbin/ifenslave bond0 eth0 
/sbin/ifenslave bond0 eth1 
---------------------------------------------------
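(Illustrative alternative only: the same bring-up expressed with modprobe,
as described in the standard bonding documentation; the options could
equally live in /etc/modprobe.conf as
"options bonding mode=active-backup miimon=100".)
/sbin/modprobe bonding mode=active-backup miimon=100
/sbin/ifconfig bond0 $IPADDR netmask $MASK broadcast $BCAST up
/sbin/ifenslave bond0 eth0 eth1
---------------------------------------------------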

---------------------------------------------------
root# cat /proc/net/bonding/bond0
(A snapshot when eth0 was ifconfig'ed down).
---------------------------------------------------
Ethernet Channel Bonding Driver: v2.6.1 (October 29, 2004)
 
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
 
Slave Interface: eth0
MII Status: down
Link Failure Count: 26
Permanent HW addr: 00:11:2f:37:c9:41
 
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:11:2f:37:c9:42
---------------------------------------------------
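(Illustrative: while toggling the slaves with ifconfig up/down, the
failover state can be watched with something like the following.)

root# watch -n 1 'grep -E "Currently Active Slave|MII Status" /proc/net/bonding/bond0'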
 
 
Thanks in advance
Noelle Milton Vega
