Re: New cluster - configuration tips and recommendation - NVMe

On 5 July 2017 at 19:54, Wido den Hollander <wido@xxxxxxxx> wrote:
I'd probably stick with 2x10Gbit for now and use the money I saved on more memory and faster CPUs.

On the latency point: you will get an improvement going from 10Gb to 25Gb, but stepping up to 100Gb won't significantly change things, since 100Gb is just 4x 25Gb lanes.
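For a rough sense of scale, here's a back-of-the-envelope sketch (wire serialization only, ignoring protocol overhead and switch hops, so just an illustration):

# Serialization delay for a 4 KiB payload at different link speeds,
# to show why the jump from 25Gb to 100Gb barely registers next to
# Ceph's software-stack latency.

payload_bits = 4096 * 8  # one 4 KiB write, ignoring protocol overhead

for gbps in (10, 25, 100):
    wire_us = payload_bits / (gbps * 1e9) * 1e6  # microseconds on the wire
    print("%3d Gb/s: %.2f us serialization delay" % (gbps, wire_us))

# Output:
#  10 Gb/s: 3.28 us serialization delay
#  25 Gb/s: 1.31 us serialization delay
# 100 Gb/s: 0.33 us serialization delay

A replicated Ceph write is usually on the order of a millisecond end to end, so shaving a microsecond or so off the wire time is lost in the noise.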

If I had to buy this sort of cluster today I'd probably look at a multi-node chassis like Dell's C6320, which holds 4x 2-socket E5v4 nodes, each of which can take 4x NVMe SSDs. I'm not sure of the exact network daughter card options available in that configuration, but they typically have Mellanox options, which would open up a 2x25Gb NIC option. At least this way you get a reasonable number of storage devices per rack unit, though it's still a poor use of space compared to a dedicated flash JBOD array.

I'm also not sure whether the end-to-end storage op latencies achieved with NVMe versus SAS or SATA SSDs in a Ceph cluster really make it that much better; I'd be interested to hear about any comparisons!
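If anyone wants to gather numbers, here's a minimal client-side latency probe using the python-rados bindings (a sketch only; it assumes python-rados is installed, /etc/ceph/ceph.conf and a keyring are readable, and a pool named "rbd" exists - adjust for your cluster):

#!/usr/bin/env python
# Time synchronous 4 KiB object writes from a client, end to end.
import time
import rados

PAYLOAD = b"\0" * 4096  # one 4 KiB object per write
SAMPLES = 1000

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")  # placeholder pool name

latencies = []
try:
    for i in range(SAMPLES):
        name = "latprobe-%d" % i
        start = time.time()
        ioctx.write_full(name, PAYLOAD)  # returns after the write is acked
        latencies.append(time.time() - start)
        ioctx.remove_object(name)
finally:
    ioctx.close()
    cluster.shutdown()

latencies.sort()
print("avg %.3f ms, p99 %.3f ms" % (
    sum(latencies) / len(latencies) * 1000,
    latencies[int(len(latencies) * 0.99)] * 1000,
))

Run it from a client against an NVMe-backed pool and a SAS/SATA-backed pool and compare the averages and tails.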

Wait another month and there should be a whole slew of new Intel and AMD platform choices available on the market.

--
Cheers,
~Blairo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
