Re: New cluster - configuration tips and recommendation - NVMe

Hi Massimiliano,

I am a little surprised to see 6x NVMe, 64GB of RAM, and 2x 100G NICs paired with the E5-2603 v4, one of the cheapest E5 Intel CPUs; mixing that CPU with such high-end gear does not make sense. Wido's right: go with a much higher frequency part, e.g. the E5-2637 v4, E5-2643 v4, E5-1660 v4, or E5-1650 v4. If you need to go cheap, the E3 series is interesting (E3-1220 v6, E3-1230 v6, ...) if you can work within its limitations: max 64GB of RAM, max 4 cores, and a single CPU.
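
To make the per-core clock gap concrete, here is a quick comparison sketch in Python; the core counts and base clocks below are from Intel's spec sheets, but double-check them against ark.intel.com before ordering:

# Rough comparison of the CPU candidates: (cores, base clock in GHz).
# Figures are per Intel's spec sheets -- verify on ark.intel.com.
candidates = {
    "E5-2603 v4": (6, 1.7),   # the CPU in the proposed config
    "E5-2637 v4": (4, 3.5),
    "E5-2643 v4": (6, 3.4),
    "E5-1650 v4": (6, 3.6),
    "E5-1660 v4": (8, 3.2),
    "E3-1220 v6": (4, 3.0),   # E3 limits: 64GB RAM, 4 cores, 1 socket
    "E3-1230 v6": (4, 3.5),
}

for model, (cores, ghz) in candidates.items():
    print(f"{model}: {cores} cores @ {ghz} GHz "
          f"(~{ghz / 1.7:.1f}x the E5-2603 v4's clock)")

Roughly double the per-core clock, which matters because individual Ceph ops tend to be latency-bound on a single thread rather than throughput-bound.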

Higher frequency should reduce latency when communicating with NICs and SSDs, which benefits Ceph's performance.
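
Note that a high base clock only helps if the cores actually run at it; on many distros the default cpufreq governor is "powersave". A minimal check, assuming a Linux host exposing the standard cpufreq sysfs files:

# Minimal sketch: report the cpufreq scaling governor per core.
# Assumes Linux with the standard /sys cpufreq interface exposed.
import glob

for path in sorted(glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor")):
    cpu = path.split("/")[5]          # e.g. "cpu0"
    with open(path) as f:
        governor = f.read().strip()   # e.g. "powersave" or "performance"
    print(f"{cpu}: {governor}")

For latency-sensitive OSD nodes it is common to set the governor to "performance".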

100G NICs are overkill for throughput, but they should reduce latency. 25G NICs are becoming popular for servers (replacing 10G NICs).
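
If you want numbers instead of guesses, a crude round-trip test between two nodes is a few lines of Python. A minimal sketch (run "server" on one node, "client <host>" on the other; port 5201 is an arbitrary placeholder):

# Crude TCP round-trip latency sketch (Python 3.8+).
# Usage: "script.py server" on one node, "script.py client <host>" on the other.
import socket, sys, time

PORT = 5201  # arbitrary placeholder port

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)  # echo back

def client(host):
    rtts = []
    with socket.create_connection((host, PORT)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(1000):
            t0 = time.perf_counter()
            sock.sendall(b"x")
            sock.recv(64)
            rtts.append(time.perf_counter() - t0)
    rtts.sort()
    print(f"median RTT: {rtts[len(rtts) // 2] * 1e6:.1f} us")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])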

Cheers,
Maxime

On Wed, 5 Jul 2017 at 10:55 Massimiliano Cuttini <max@xxxxxxxxxxxxx> wrote:

Dear all,

Luminous is coming, and soon we should be able to avoid double writes (BlueStore writes data once, instead of journaling it first as FileStore does).
This means using 100% of the speed of SSDs and NVMe drives.
Clusters made entirely of SSDs and NVMe will no longer be penalized and will start to make sense.
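
Once such a cluster is running, you can confirm each OSD is actually on BlueStore rather than FileStore. A minimal sketch, assuming the ceph CLI is installed with admin access to the cluster:

# Minimal sketch: count OSDs per object store backend.
# Assumes the "ceph" CLI is installed and can reach the cluster.
import json, subprocess
from collections import Counter

raw = subprocess.check_output(["ceph", "osd", "metadata", "--format=json"])
backends = Counter(osd.get("osd_objectstore", "unknown") for osd in json.loads(raw))

for backend, count in backends.items():
    print(f"{backend}: {count} OSDs")  # e.g. "bluestore: 24 OSDs"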

Looking ahead, I'm planning the next storage pool, which we'll set up next term.
We are considering a pool of 4 nodes, each with the following configuration:

  • 2x E5-2603 v4 - 6 cores - 1.70GHz
  • 2x 32GB of RAM
  • 2x NVMe M.2 for OS
  • 6x NVMe U.2 for OSD
  • 2x 100Gb Ethernet cards
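
A back-of-the-envelope check of where that configuration bottlenecks first (the ~2 GB/s per-drive sequential figure is an assumption for a typical datacenter U.2 drive, not a measured number):

# Back-of-envelope: raw per-node drive bandwidth vs. NIC capacity.
# The per-drive figure is an ASSUMED typical NVMe U.2 number.
NVME_DRIVES = 6
GBPS_PER_DRIVE = 2.0 * 8            # ~2 GB/s sequential -> 16 Gbit/s
NIC_GBPS = 2 * 100                  # 2x 100GbE

drive_gbps = NVME_DRIVES * GBPS_PER_DRIVE
print(f"aggregate NVMe: ~{drive_gbps:.0f} Gbit/s vs NICs: {NIC_GBPS} Gbit/s")
# ~96 Gbit/s of raw drive bandwidth: a single 100GbE port already covers
# it, so the CPU is the more likely bottleneck than the second port.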

We are not yet sure which Intel CPU and how much RAM we should put in each node to avoid a CPU bottleneck.
Can you help me choose the right pair of CPUs?
Do you see any issues with the proposed configuration?


Thanks,
Max

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
