Re: NIC bonding (LACP) settings for Ceph

Thanks for the answer.
I'm leaning toward ad_select bandwidth because we use the OSD nodes for RGW
gateways, VMs, and various other applications.

I have separate cluster (10+10 GbE) and public (10+10 GbE) networks.
I tested stable, bandwidth, and count. The results are clearly best with
bandwidth; count is the worst option.
But I wonder whether the bandwidth calculation has any effect on network
latency. If it does, I will go back to stable. I don't know yet, but my
thinking is: if the bonding driver has to calculate bandwidth and make a
decision every time, that should add some CPU overhead and delay. If it has
no such effect, then bandwidth will give the better distribution.
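For reference, both knobs live under sysfs and can be inspected and changed there. A rough sketch (the bond name bond0 is an assumption; adjust to your setup, and note that ad_select can only be changed while the bond is down or at module load time):

```shell
# Inspect the current settings (bond0 is a hypothetical bond name):
cat /sys/class/net/bond0/bonding/ad_select          # e.g. "stable 0"
cat /sys/class/net/bond0/bonding/xmit_hash_policy   # e.g. "layer2 0"

# ad_select requires the bond to be down before it can be changed:
ip link set bond0 down
echo bandwidth > /sys/class/net/bond0/bonding/ad_select
echo layer3+4  > /sys/class/net/bond0/bonding/xmit_hash_policy
ip link set bond0 up
```

The same options can be set persistently via module parameters or your distribution's network configuration instead of sysfs.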

Now I know that I have to use layer3+4, but I still can't decide on
ad_select: bandwidth or stable?
Can we discuss it, please?
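To illustrate why layer3+4 can spread traffic between the same two nodes while layer2 cannot: the kernel bonding documentation gives a simplified slave-selection formula that mixes the TCP/UDP ports into the hash, so two connections between the same pair of hosts (same MACs, same IPs) can land on different NICs. A rough sketch of that formula in shell (the IPs and ports here are made-up example values, not from this thread):

```shell
#!/bin/sh
# Simplified layer3+4 slave selection for IPv4, following the formula in
# the kernel bonding documentation:
#   ((sport XOR dport) XOR ((sip XOR dip) AND 0xffff)) modulo slave count
hash_l34() {
    sip=$1 dip=$2 sport=$3 dport=$4 nslaves=$5
    echo $(( ((sport ^ dport) ^ ((sip ^ dip) & 0xffff)) % nslaves ))
}

# Two flows between the SAME pair of hosts, differing only in source port,
# on a 2-slave bond -- they hash to different slaves:
hash_l34 0x0A000001 0x0A000002 45000 6800 2   # -> 1
hash_l34 0x0A000001 0x0A000002 45001 6800 2   # -> 0
```

With layer2, by contrast, the hash is derived from the MACs alone, so every flow between two fixed hosts maps to the same slave.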

On Mon, 28 Jun 2021 at 20:15, Marc 'risson' Schmitt <risson@xxxxxxxxxxxx>
wrote:

> Hi,
>
> On Sat, 26 Jun 2021 16:47:19 +0300
> mhnx <morphinwithyou@xxxxxxxxx> wrote:
> > I've changed ad_select to bandwidth and both NICs are in use now, but
> > the layer2 hash prevents dual-NIC usage between two nodes (because
> > layer2 hashes only on the MAC).
>
> As I understand it, setting ad_select to bandwidth is only going to be
> useful if you have several link aggregates in the same bond, like when
> you are connected in LACP to multiple (non-stacked) switches.
>
> > People advise using layer2+3 for best performance, but it has no
> > effect on OSDs because the MAC and IP are the same.
> > I've tried layer3+4 to split by ports instead of MAC, and it works. But
> > I don't know what the effect will be, and my switch is layer2.
>
> We are setting layer3+4 on both our servers and our switches.
>
> Regards,
>
> --
> Marc 'risson' Schmitt
> CRI - EPITA
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>