Hi,
I’m working on reducing the costs of our current Ceph cluster. We currently keep 3x replicas, all of them on SSDs (the cluster hosts RBD disks for several hundred VMs), and lately I’ve been wondering whether the following setup would make sense in order to improve cost/performance.
The idea would be to move the PG primaries to high-performance nodes using NVMe, keep the secondary replica on SSDs, and move the third replica to HDDs.
Most probably the hardware will be:
1st Replica: Intel P4500 NVMe (2TB)
2nd Replica: Intel S3520 SATA SSD (1.6TB)
3rd Replica: WD Gold hard drives (2 TB) (I’m considering either the 1TB or the 2TB model, as I want to have as many spindles as possible)
Also, the hosts running the OSDs would have quite different HW configurations (in our experience, NVMe drives need serious CPU power to get the best out of them).
I know the NVMe and SATA SSD replicas will work fine, no concerns there (we’ll just adjust primary affinity and the crushmap in order to get the desired data layout + primary OSDs); what I’m worried about is the HDD replica.
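Just to illustrate what I mean (rule name, bucket names and OSD ids below are only placeholders, not our real layout), I was thinking of something along these lines on the current pre-Luminous cluster: a CRUSH rule that picks one host from each tier, with the NVMe bucket emitted first so its OSDs normally end up at the front of the acting set:

    # crushmap sketch: one replica per tier; nvme-root/ssd-root/hdd-root are
    # placeholder roots for however we end up splitting the CRUSH tree
    rule nvme_ssd_hdd {
            ruleset 1
            type replicated
            min_size 1
            max_size 3
            step take nvme-root
            step chooseleaf firstn 1 type host
            step emit
            step take ssd-root
            step chooseleaf firstn 1 type host
            step emit
            step take hdd-root
            step chooseleaf firstn 1 type host
            step emit
    }

plus primary affinity on top of that, roughly:

    # pre-Luminous mons need this for primary affinity to be honoured
    ceph tell mon.* injectargs '--mon_osd_allow_primary_affinity=true'
    # keep the SSD/HDD OSDs from being chosen as primary (osd id is an example)
    ceph osd primary-affinity osd.12 0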
Also, the pool will have min_size 1 (I would love to use min_size 2, but it would kill latency), so even if we have to do some maintenance on the NVMe nodes, writes to the HDDs will always be “lazy”.
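(For reference, I just mean setting it per pool, e.g.:

    ceph osd pool set <our-rbd-pool> min_size 1

with <our-rbd-pool> being whatever the pool ends up being called.)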
Before BlueStore (we are planning to move to Luminous, most probably by the end of the year or beginning of 2018, once it has been released and properly tested), I would just have used SSD/NVMe journals for the HDDs, so all writes would go to the SSD journal first and then be flushed to the HDD. But now, with BlueStore, I don’t think that’s an option anymore.
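(From what I’ve read, the closest BlueStore equivalent would be putting block.db / block.wal on a faster device, e.g. something like the following with ceph-disk on Luminous, where the device paths are pure placeholders:

    # HDD holds the BlueStore data; RocksDB + WAL go to NVMe partitions
    ceph-disk prepare --bluestore /dev/sdb \
            --block.db /dev/nvme0n1p1 \
            --block.wal /dev/nvme0n1p2

but as far as I understand that only covers metadata/WAL, not the full data write path the old FileStore journal gave us, which is why I’m not counting on it.)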
What I’m worried about is how having a quite slow third replica would affect the NVMe primary OSDs. The WD Gold drives seem quite decent (for SATA drives), but obviously their performance is nowhere near SSDs or NVMe.
So, what do you think? Does anybody have opinions or experience they would like to share?
Thanks!
Xavier.