Re: Networking Idea/Question


 



Dave

That’s the way our cluster is set up. It’s relatively small: 5 hosts, 12 OSDs.

Each host has 2x10G with LACP to the switches.  We’ve VLAN’d the public/private networks.

Making the best use of the LACP LAG largely comes down to choosing the right hashing policy.  At the moment we’re using layer3+4 in both the Linux bonding config and the switch config.  We’re monitoring link utilisation to make sure the balancing stays as close to equal as possible.
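For reference, a minimal iproute2 sketch of that kind of setup — the interface names (eno1/eno2) and VLAN IDs here are assumptions for illustration, not our actual config:

```shell
# Create an LACP (802.3ad) bond hashed on layer3+4 (IP addresses + ports),
# so different TCP/UDP flows can land on different physical links
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link set eno1 down && ip link set eno1 master bond0
ip link set eno2 down && ip link set eno2 master bond0
ip link set bond0 up

# Tagged VLANs on the bond for the Ceph public and cluster networks
# (VLAN IDs 100/200 are hypothetical)
ip link add link bond0 name bond0.100 type vlan id 100   # public network
ip link add link bond0 name bond0.200 type vlan id 200   # cluster network
ip link set bond0.100 up
ip link set bond0.200 up
```

The switch-side port-channel needs the matching LACP and hashing configuration, which is vendor-specific.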

Hope this helps

A

Sent from my iPhone

On 15 Mar 2021, at 16:39, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:

I have client and cluster network on one 10gbit port (with different vlans). 
I think many smaller clusters do this ;)

> 
> I've been thinking about ways to squeeze as much performance as possible
> from the NICs on a Ceph OSD node.  The nodes in our cluster (6 x OSD, 3
> x MGR/MON/MDS/RGW) currently have 2 x 10Gb ports.  Currently, one port
> is assigned to the front-side network, and one to the back-side
> network.  However, there are times when the traffic on one side or the
> other is more intense and might benefit from a bit more bandwidth.
> 
> The idea I had was to bond the two ports together, and to run the
> back-side network in a tagged VLAN on the combined 20Gb LACP bond.  In
> order to keep the balance and prevent starvation on either side, it
> would be necessary to apply some sort of weighted fair queuing
> mechanism via the 'tc' command.  The idea is that if the client side
> isn't using its full 10Gb/node and there is a burst of re-balancing
> activity, the bandwidth consumed by the back-side traffic could swell to
> 15Gb or more.  Or vice versa.
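The weighted sharing described above could be sketched with HTB: each side gets a 10Gbit guarantee and may borrow up to the full 20Gbit when the other side is idle.  The device name, subnet, and rates below are assumptions, and the u32 filter matching on an IP subnet only works where the qdisc sees untagged IP packets — the exact classification depends on where in the stack the VLAN tag is pushed:

```shell
# HTB root qdisc on the bond; unmatched (client) traffic falls into class 1:20
tc qdisc add dev bond0 root handle 1: htb default 20
tc class add dev bond0 parent 1:  classid 1:1  htb rate 20gbit
# Cluster (back-side) class: guaranteed 10gbit, may borrow up to 20gbit
tc class add dev bond0 parent 1:1 classid 1:10 htb rate 10gbit ceil 20gbit
# Client (front-side) class: same guarantee and ceiling
tc class add dev bond0 parent 1:1 classid 1:20 htb rate 10gbit ceil 20gbit
# Steer packets destined for the (hypothetical) cluster subnet into 1:10
tc filter add dev bond0 parent 1: protocol ip u32 \
    match ip dst 192.168.200.0/24 flowid 1:10
```

Because both classes share parent 1:1, HTB lends unused bandwidth between them automatically, which is the responsiveness property the paragraph above is after.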
> 
> From what I have read and studied, these algorithms are fairly
> responsive to changes in load and would thus adjust rapidly if the
> demand from either side suddenly changed.
> 
> Maybe this is a crazy idea, or maybe it's really cool.  Your thoughts?
> 
> Thanks.
> 
> -Dave
> 
> --
> Dave Hall
> Binghamton University
> kdhall@xxxxxxxxxxxxxx
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



