Suitable 10G switches for Ceph storage - any recommendations?

Dear Ceph users,
I am currently building a small hyperconverged Proxmox cluster with
Ceph as storage. Until now I have always had 3 nodes, which I connected
directly to each other via two bonded 10G network interfaces for the
Ceph storage, so I never needed any switches.

This new cluster has more nodes, so I am considering a 10G switch for
the storage network. Since I have no experience with such a setup, I
wonder whether there are any specific issues I should keep in mind
(latency, for example)?
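As a rough way to quantify that concern, one could compare round-trip
times and throughput over a direct link against the same path through
the switch, e.g. with ping and iperf3 (the address below is just a
placeholder for a peer's storage IP):

    # round-trip latency to a peer's storage address (10.10.10.2 is a placeholder)
    ping -c 1000 -i 0.2 -q 10.10.10.2

    # sustained throughput; assumes "iperf3 -s" is already running on the peer
    iperf3 -c 10.10.10.2 -t 30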

Since the whole cluster should not be too expensive, I am currently
thinking of the following solution:

2x MikroTik CRS317-1G-16S+RM switches:
https://mikrotik.com/product/crs317_1g_16s_rm#fndtn-testresults

SFP+ cables like these:
https://www.fs.com/de/products/48883.html

A network interface card with two SFP+ ports for each node, e.g. the Intel X520-DA2:
https://ark.intel.com/content/www/de/de/ark/products/39776/intel-ethernet-converged-network-adapter-x520-da2.html

Connect each node's two ports, one to each switch, and configure them
in an active/backup (master/slave) bond so that the switches are
redundant.
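For illustration, here is a minimal sketch of what such a bond could
look like in /etc/network/interfaces on a Proxmox (Debian) node. The
interface names and the 10.10.10.0/24 subnet are assumptions; since the
two CRS317s would not be stacked, plain active-backup mode rather than
LACP is used:

    # sketch only: enp1s0f0/enp1s0f1 and 10.10.10.11 are placeholders
    auto bond0
    iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode active-backup
        bond-miimon 100
        bond-primary enp1s0f0
        # enp1s0f0 -> switch 1, enp1s0f1 -> switch 2

With active-backup, only one 10G link carries traffic at a time; if a
switch fails, traffic simply fails over to the port on the other switch.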

What do you think of this setup? Or is there any information or
recommendation for an optimized setup of a 10G storage network?

Best Regards,
Hermann

-- 
hermann@xxxxxxx
PGP/GPG: 299893C7 (on keyservers)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


