Re: Suitable 10G Switches for ceph storage - any recommendations?

Hi Hermann,

Yes, I asked the same question a while ago and received very valuable advice. We ended up purchasing two refurbished 40G Arista switches, for very little money compared to new 10G switches.

Ours are these:
https://emxcore.com/shop/category/product/arista-dcs-7050qx-32s/

The complete thread on the subject, including many more recommendations, is here:
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/5DH57H4VO2772LTXDVD4APMPK3DRZDKD/#5DH57H4VO2772LTXDVD4APMPK3DRZDKD

Best,
MJ

On 5/19/21 2:10 PM, Max Vernimmen wrote:
Hermann,

I think there was a discussion on recommended switches not too long ago;
you should be able to find it in the mailing list archives.
I think network latency is usually a minor factor compared to Ceph's
dependency on CPU and disk latency, so for a simple cluster I wouldn't
worry about it too much.
I have found that fs.com's DAC cables get stuck a lot, so I don't use them
anymore. I usually buy Dell or Mellanox cables.
Regarding network cards, I've found the Intel cards to be not that great
due to bugs with LACP bonds, the embedded LLDP agent getting in the way,
and other issues. So I'm using Mellanox cards instead, but Broadcom should
also work.
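
For reference, on Intel 700-series cards the embedded firmware LLDP agent
can be switched off via a driver private flag. This is only a sketch: the
flag below is the one documented for the i40e driver, and the interface
name enp65s0f0 is a placeholder; both may differ on your hardware and
firmware.

    # Turn off the NIC firmware's embedded LLDP agent so LLDP frames
    # actually reach the host (i40e driver private flag; name may
    # differ for other drivers):
    ethtool --set-priv-flags enp65s0f0 disable-fw-lldp on

    # Verify the flag took effect:
    ethtool --show-priv-flags enp65s0f0

Depending on firmware, this setting may not survive a reboot, so you may
need to apply it at boot time (e.g. from a post-up hook).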

Hope it helps!

Best regards,


Max

On Wed, May 19, 2021 at 1:48 PM <ceph-users-request@xxxxxxx> wrote:

---------- Forwarded message ----------
From: Hermann Himmelbauer <hermann@xxxxxxx>
To: ceph-users@xxxxxxxx
Date: Wed, 19 May 2021 11:22:26 +0200
Subject: Suitable 10G Switches for ceph storage - any recommendations?
Dear Ceph users,
I am currently building a small hyperconverged Proxmox cluster with
Ceph as storage. So far I have always had 3 nodes, which I linked
together directly via 2 bonded 10G network interfaces for the Ceph
storage, so I never needed any switches.

This new cluster has more nodes, so I am considering a 10G switch
for the storage network. As I have no experience with such a setup, I
wonder if there are any specific issues I should think about
(latency, ...)?

As the whole cluster should not be too expensive, I am currently
considering the following solution:

2x MikroTik CRS317-1G-16S+RM switches:
https://mikrotik.com/product/crs317_1g_16s_rm#fndtn-testresults

SFP+ DAC cables like these:
https://www.fs.com/de/products/48883.html

A network interface with two SFP+ ports for each node, e.g.:

https://ark.intel.com/content/www/de/de/ark/products/39776/intel-ethernet-converged-network-adapter-x520-da2.html

Connect one port of each node to each switch and configure an
active/backup (master/slave) bond so that the switches are redundant.
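
For illustration, a minimal sketch of such an active/backup bond as it
could look in /etc/network/interfaces on a Proxmox node. The interface
names, the address, and bond0 itself are placeholders, not a tested
config:

    # Ceph storage bond: one SFP+ port cabled to each switch.
    # active-backup mode needs no MLAG/stacking support on the switches.
    auto bond0
    iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode active-backup
        bond-primary enp1s0f0
        # check link state every 100 ms for failover
        bond-miimon 100

The currently active slave and the failover state can be inspected in
/proc/net/bonding/bond0.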

What do you think of this setup - or is there any information or
recommendation for an optimized 10G storage network setup?

Best Regards,
Hermann

--
hermann@xxxxxxx
PGP/GPG: 299893C7 (on keyservers)


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
