Re: About 100g network card for ceph

> I would treat having a separate cluster network
> at all as a serious cluster design bug.

I wouldn’t go quite that far; there are still situations where it can be the right thing to do, like when one is stuck with only 1GE or 10GE networking but NICs and switch ports abound.  Then having separate networks, each with bonded links, can make sense.
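If one does go that route, for reference it’s just the two subnets in ceph.conf; the addresses below are placeholders for your own:

    [global]
    public_network  = 192.168.1.0/24
    cluster_network = 192.168.2.0/24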

I’ve also seen network scenarios where bonding isn’t feasible, say a very large cluster where the TORs aren’t redundant.  In such a case, one might reason that decreasing osd_max_markdown_count limits the impact of flapping, since an OSD that keeps getting marked down will take itself out sooner, and at that scale the impact of the described flapping may be amortized down toward the noise floor.
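For example, something like the below; the value is illustrative rather than a recommendation, the default being 5 markdowns within osd_max_markdown_period before the OSD stops trying:

    # Have a flapping OSD give up and exit after 3 markdowns within
    # osd_max_markdown_period (default 600s), rather than the default 5.
    ceph config set osd osd_max_markdown_count 3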

When bonding, always, always talk to your networking folks about the right xmit_hash_policy for your deployment.  Suboptimal values rob people of bandwidth all the time.
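For the sake of illustration, a sketch of an LACP bond in ifupdown syntax; the interface names are placeholders, and whether layer3+4 is the right policy for your environment is exactly the conversation to have with your network folks:

    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode 802.3ad
        bond-miimon 100
        # Hash on L3/L4 headers so Ceph's many TCP flows spread across
        # the links; the kernel default (layer2) can pin most traffic
        # to a single link.
        bond-xmit-hash-policy layer3+4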

> Reason: a single faulty NIC or
> cable or switch port on the backend network can bring down the whole
> cluster. This is even documented:
> 
> https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd/#flapping-osds

I love it when people reference stuff I suffered through and then wrote about :D.  I haven’t seen it bring down a whole cluster as such, but it does have an impact and can be tricky to troubleshoot if you aren’t looking for it.  FWIW, the clusters I wrote about there did have bonded private and public networks, but weren’t very large by modern standards.

> 
> On Thu, Oct 10, 2024 at 3:23 PM Phong Tran Thanh <tranphong079@xxxxxxxxx> wrote:
>> 
>> Hi ceph users
>> 
>> I have a 100G network card with dual ports for a Ceph node with NVMe disks.
>> Should I bond them or not? Should I bond 200G for both the public and
>> cluster networks, or separate it: one port for the public network and one
>> for the cluster?
>> 
>> Thank ceph users
>> --
>> Email: tranphong079@xxxxxxxxx
>> Skype: tranphong079
> 
> -- 
> Alexander Patrakov
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



