Re: 6 Node cluster with 24 SSD per node: Hardware planning / agreement


 



> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Denny Fuchs
> Sent: 05 October 2016 12:43
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  6 Node cluster with 24 SSD per node: Hardware planning / agreement
> 
> hi,
> 
> I got a call from Mellanox, and we now have an offer for the following
> network:
> 
> * 2 x SN2100 100Gb/s switches, 16 ports each
> * 10 x ConnectX-4 Lx EN 25Gb/s cards for the hypervisor and OSD nodes
> * 4 x Mellanox QSA-to-SFP+ adapters for interconnecting with our HP 2920 switches
> * 3 x copper splitter cables, 1 x 100Gb/s -> 4 x 25Gb/s

Even better than 10G: 25GbE runs its lanes at a higher clock rate than 10GbE, so you should see slightly lower latency vs 10G. Just make sure the kernel
you will be using supports those NICs.
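
As a quick sanity check once the nodes are up, here is a minimal sketch (not from the thread; the interface name "ens1f0" and the expected values are assumptions) that reads sysfs to confirm a ConnectX-4 Lx port is bound to the mlx5_core driver and has negotiated 25000 Mb/s:

#!/usr/bin/env python3
# Minimal sketch: verify that a NIC is bound to the expected kernel driver
# and has negotiated the expected link speed, via sysfs.
# IFACE is a placeholder name; mlx5_core is the driver used by ConnectX-4 Lx.
import os
import sys

IFACE = "ens1f0"                 # assumed interface name
EXPECTED_DRIVER = "mlx5_core"
EXPECTED_SPEED_MBPS = 25000      # 25GbE

def nic_driver(iface):
    # /sys/class/net/<iface>/device/driver is a symlink to the bound driver
    return os.path.basename(os.readlink(f"/sys/class/net/{iface}/device/driver"))

def nic_speed(iface):
    # /sys/class/net/<iface>/speed reports the negotiated speed in Mb/s
    with open(f"/sys/class/net/{iface}/speed") as f:
        return int(f.read().strip())

if __name__ == "__main__":
    driver = nic_driver(IFACE)
    speed = nic_speed(IFACE)
    print(f"{IFACE}: driver={driver}, speed={speed} Mb/s")
    if driver != EXPECTED_DRIVER or speed != EXPECTED_SPEED_MBPS:
        sys.exit(f"{IFACE}: expected {EXPECTED_DRIVER} at {EXPECTED_SPEED_MBPS} Mb/s")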

> 
> 
> So, if the price fits, that should be OK for everything else ....  :-)
> 
> cu denny

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


