Re: New cluster - configuration tips and recommendation - NVMe

On 2017-07-05 23:22, David Clarke wrote:

On 07/05/2017 08:54 PM, Massimiliano Cuttini wrote:
Dear all,

Luminous is coming, and soon we should be able to avoid double writes.
This means using 100% of the speed of SSDs and NVMe.
Clusters made entirely of SSDs and NVMe will no longer be penalized and
will start to make sense.

Looking ahead, I'm building the next storage pool, which we'll set up
next term.
We are considering a pool of 4 nodes with the following single-node
configuration:

  * 2x E5-2603 v4 - 6 cores - 1.70GHz
  * 2x 32GB of RAM
  * 2x NVMe M.2 for OS
  * 6x NVMe U.2 for OSDs
  * 2x 100Gbit Ethernet cards

We are not yet sure which Intel CPUs and how much RAM we should put in
them to avoid a CPU bottleneck.
Can you help me choose the right pair of CPUs?
Do you see any issues with the proposed configuration?

There are notes on ceph.com regarding flash deployments, NVMe in
particular:

http://tracker.ceph.com/projects/ceph/wiki/Tuning_for_All_Flash_Deployments


This is a nice link, but the Ceph configuration is a bit dated; it was done for Hammer, and a couple of config params were dropped in Jewel. I hope Intel publishes some new settings for Luminous/BlueStore!
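
A quick way to check whether a parameter from that guide still exists in your release is to ask a running daemon for its full config. A minimal sketch (osd.0 and the parameter name are just placeholders):

  # Dumps every config option the daemon knows about; if grep finds
  # nothing, the parameter was dropped in your release.
  ceph daemon osd.0 config show | grep osd_op_threads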

In addition to tuning ceph.conf, sysctl, and udev, it is important to run stress benchmarks such as rados bench / rbd bench and to measure the system load via atop/collectl/sysstat; see the sketch below. This will tell you where your bottlenecks lie. If you will be doing many tests, you may find CBT (the Ceph Benchmarking Tool) handy, as it lets you script incremental tests.
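
As a rough sketch of the manual approach (pool name, image name, and sizes are placeholders; the rbd bench syntax below is the Luminous one, older releases use rbd bench-write instead):

  # 60s of 4 KiB writes, 16 ops in flight; keep the objects so a
  # read test can follow
  rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
  # random reads against the objects written above
  rados bench -p testpool 60 rand -t 16
  # remove the benchmark objects when done
  rados -p testpool cleanup

  # RBD-level random-write test against an existing test image
  rbd bench --io-type write --io-size 4096 --io-threads 16 \
      --io-total 1G --io-pattern rand testpool/testimage

  # meanwhile, on each OSD node, watch CPU, disk and network load
  collectl -sCDN        # or: sar -u -d -n DEV 1

Run the write test long enough to get past any caches, and repeat with different block sizes and thread counts to see where throughput flattens out.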

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
