Packet loss after configuring Ethernet bonding

Hi All, 


I need help resolving an issue with implementing high availability at the network level. I understand this may not be the right forum for the question, but since it is related to HA and Linux, I am asking here in the hope that someone has run into the same problem.

I am trying to implement Ethernet bonding. The two interfaces in my server are connected to two different network switches.


My configuration is as follows: 


========
# cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: e4:e1:5b:d0:11:10
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: e4:e1:5b:d0:11:14
Slave queue ID: 0
------------
# cat /sys/class/net/bond0/bonding/mode
balance-alb 6

# cat /sys/class/net/bond0/bonding/miimon
0

============


The issue is that I am seeing packet loss after configuring bonding. I tried connecting both interfaces to the same switch, but the packet loss persists. I also tried changing the miimon value to 100, and still see the loss.
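In case it is relevant, this is roughly how I would expect the bonding options to be persisted on a RHEL-style system with ifcfg network scripts (a sketch only; the mode=active-backup and miimon=100 values are just one commonly suggested combination for slaves on different switches, not my current config, and the IP address is a placeholder):

========
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.10      # placeholder address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
# mode and link-monitoring interval passed to the bonding driver
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and similarly for ifcfg-eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
============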

What am I missing in the configuration? Any help in resolving this problem would be highly appreciated.



Thanks
Zaman

-- 
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


