Re: NIC bonding (LACP) settings for Ceph

On Mon, 28 Jun 2021 22:35:36 +0300
mhnx <morphinwithyou@xxxxxxxxx> wrote:
> To be clear.
> I have a stacked switch pair and this is my configuration.
> 
> Bonding cluster: (hash 3+4)
> Cluster nic1 (10GbE) -> Switch A
> Cluster nic2 (10GbE) -> Switch B
> 
> Bonding public: (hash 3+4)
> Public  nic1 (10GbE) -> Switch A
> Public  nic2 (10GbE) -> Switch B
> 
> Data distribution wasn't good at the beginning due to layer2 bonding.
> With hash 3+4 it's better now.
> 
> But when I test the network with "iperf --parallel 2" and
> "ad_select=stable", sometimes it uses both NICs and sometimes only
> one. After that I changed to "ad_select=bandwidth" and the data
> distribution looked better. Every iperf test was successful, and
> when one port already had traffic on it, the next request always
> used the free port. That's why I'm digging into it. If it doesn't
> have any downside or overhead, then the winner in my tests is
> bandwidth. I will share the test results in my next mail.
> 
> PS: How should I test latency?

iperf --parallel chooses random source ports, and since you enabled
layer3+4 hashing, you will get varying results depending on which
ports happen to be selected.
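
If you want repeatable results, pin the ports so the layer3+4 hash
comes out the same on every run. A rough sketch with iperf3 (the
addresses and port numbers here are just examples):

  # start one server per test port on the receiving node
  iperf3 -s -p 5201 &
  iperf3 -s -p 5202 &

  # on the sending node, fix both destination and source ports so
  # each stream hashes to the same slave on every run
  iperf3 -c 10.0.0.2 -p 5201 --cport 50001 -t 30 &
  iperf3 -c 10.0.0.2 -p 5202 --cport 50002 -t 30 &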

If your switches are stacked and handle bonding across both of them,
which I'm guessing they do, you probably don't need ad_select=bandwidth
for the reasons explained by Andrew.
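
Either way, it's worth confirming what the bond is actually running
with. A quick check via sysfs, assuming the bond device is named
bond0:

  cat /sys/class/net/bond0/bonding/mode              # e.g. "802.3ad 4"
  cat /sys/class/net/bond0/bonding/xmit_hash_policy  # e.g. "layer3+4 1"
  cat /sys/class/net/bond0/bonding/ad_select         # e.g. "stable 0"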

> I'm not a network expert, I'm just trying to understand the concept.
> My switch is a layer2+3 TOR switch. I use standard active-active
> port-channel settings. I wonder, if I don't change the switch side to
> 3+4, what will the effect be on the rest?
> I think TX will be shared across both NICs but RX will always use one
> NIC, because the switch's hash algorithm differs, but that's just a
> guess.

There shouldn't be any problem with layer3+4 on one side and layer2 on
the other, since each side's hash policy only affects the traffic it
transmits itself. So you can change that setting on your switch
without having to worry about breaking other bonds set up on it.
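
The same goes for the host side: xmit_hash_policy only affects
transmitted frames, so you should even be able to change it at runtime
without tearing down the LACP aggregation. A sketch, again assuming a
bond named bond0:

  # host-side hash policy; the switch keeps its own setting
  echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy

  # confirm the aggregator is still up afterwards
  grep -A4 'Active Aggregator Info' /proc/net/bonding/bond0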

-- 
Marc 'risson' Schmitt
CRI - EPITA