Re: [RBD] Replace block device cluster

In response to my own questions: I've read that with BlueStore you shouldn't separate your journal/RocksDB from the disks where your data resides. And the general rule of one core per OSD seems unnecessary, since in the current cluster we've got 4 cores serving 5 disks and CPU usage never goes over 20-30%.
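
As an illustration (not what we plan to do): if we ever did want the RocksDB on a faster device, ceph-volume supports that at OSD-creation time. A minimal sketch, with placeholder device names (/dev/sdb for the data HDD, /dev/nvme0n1p1 for a DB partition):

  # everything (data + RocksDB + WAL) on one device, the BlueStore default
  ceph-volume lvm create --bluestore --data /dev/sdb

  # RocksDB (and with it the WAL) on a separate, faster partition
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1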


New questions: should I separate the admin/monitor nodes from the data storage nodes (separate HDD, or a separate machine)? And could I use a separate machine with an SSD for caching? We can't add SSDs to these dedicated machines, so the network would likely become the bottleneck (a single 1 Gbps link tops out around 125 MB/s) and no remarkable speed boost would be noticed.
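
For the caching idea, my understanding is that an SSD-backed cache would be set up as a cache tier over the backing pool; roughly like this, with made-up pool names (rbd as the backing pool, rbd-cache as an SSD pool):

  ceph osd tier add rbd rbd-cache
  ceph osd tier cache-mode rbd-cache writeback
  ceph osd tier set-overlay rbd rbd-cache
  ceph osd pool set rbd-cache hit_set_type bloom

Though the Ceph docs caution that cache tiering often degrades rather than improves performance for RBD workloads, so this would need testing first.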


Back to the interwebz for research 😊



From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Nino Bosteels <n.bosteels@xxxxxxxxxxxxx>
Sent: 19 July 2018 16:01
To: ceph-users@xxxxxxxxxxxxxx
Subject: [ceph-users] [RBD]Replace block device cluster
 

We’re looking to replace our existing RBD cluster, which makes and stores our backups. At the moment we’ve got one machine running BackupPC, where the RBD is mounted, plus 8 Ceph nodes.
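
(For context, the mount on the BackupPC machine is a plain kernel RBD mapping; roughly, with made-up pool/image names:

  rbd map backups/backuppc    # appears as a block device, e.g. /dev/rbd0
  mount /dev/rbd0 /var/lib/backuppc
)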

 

The idea is to gain speed and/or pay less (or pay the same for moar speed).

 

We're unsure whether to get SSDs in the mix. Have I understood correctly that they're useful for setting up a cache pool and/or for separating out the journal? Can I use a different server for this?

 

 

Old specs (8 machines):
CPU:    Intel Xeon D-1520, 4c/8t, 2.2/2.6 GHz
RAM:    32 GB DDR4 ECC 2133 MHz
Disks:  5x 6 TB SAS2
Public network: 1x 1 Gbps

40 disks, for a total of 1159.92 euro

 

Consideration for new specs:

3 machines:
CPU:    Intel Xeon E5-2620 v3, 6c/12t, 2.4/3.2 GHz
RAM:    64 GB DDR4 ECC 1866 MHz
Disks:  12x 4 TB SAS2
Public network: 1x 1 Gbps

36 disks, for a total of 990 euro

 

10 machines:
CPU:    Intel Xeon D-1521, 4c/8t, 2.4/2.7 GHz
RAM:    16 GB DDR4 ECC 2133 MHz
Disks:  4x 6 TB
Public network: 1x 1 Gbps

40 disks, for a total of 940 euro

Perhaps this last option, in combination with SSDs?!
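
A rough raw-capacity comparison (ignoring replication overhead):

  old (8 machines):  40 x 6 TB = 240 TB raw, 1159.92 euro -> ~4.83 euro/TB
  3 machines:        36 x 4 TB = 144 TB raw,  990.00 euro -> ~6.88 euro/TB
  10 machines:       40 x 6 TB = 240 TB raw,  940.00 euro -> ~3.92 euro/TB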

 

Any advice is greatly appreciated.

 

How do you make your decisions/comparisons? One disk per OSD, I guess, but then how many cores per disk, and so on?
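
My own back-of-the-envelope, assuming one OSD per disk:

  old (8 machines):  4c/8t  over 5 OSDs  = 0.8 cores/OSD, 32 GB / 5  = 6.4 GB RAM/OSD
  3 machines:        6c/12t over 12 OSDs = 0.5 cores/OSD, 64 GB / 12 ~ 5.3 GB RAM/OSD
  10 machines:       4c/8t  over 4 OSDs  = 1.0 cores/OSD, 16 GB / 4  = 4.0 GB RAM/OSD

Against the oft-quoted rule of thumb of roughly 1 GB RAM per TB of OSD, the 10-machine option is the tightest (4 GB per 6 TB OSD); the other two sit at or above it.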

 

Thanks in advance.

 

Nino Bosteels

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
