Re: New Ceph cluster design

I'd increase the RAM. 1GB of RAM per 1TB of disk is the usual recommendation. 
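
A minimal back-of-the-envelope with the node spec from the post below; by that rule of thumb, the planned 64GB is roughly half of what these nodes would want:

    # Quick check of the 1GB RAM per 1TB of disk rule of thumb
    # against the node spec in this thread.
    disks_per_node = 12
    disk_size_tb = 10
    recommended_ram_gb = disks_per_node * disk_size_tb  # 1GB per TB
    print(f"recommended: ~{recommended_ram_gb}GB, planned: 64GB")
    # -> recommended: ~120GB, planned: 64GB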

Another thing you need to consider is your node density. 12x10TB is a lot of data to rebalance if you aren't going to have 20+ nodes. I have 17 nodes with 24x6TB disks each, and rebuilds can take what seems like an eternity. It may be worth looking at cheaper sockets and smaller disks to increase your node count. 
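
A minimal sketch of why, assuming a mostly full node and a 10Gbit recovery path (both assumptions for illustration, not measurements):

    # Rough estimate of how long re-replicating a failed node takes.
    # Real recovery is throttled (osd_max_backfills and friends) and
    # usually runs well below raw line rate, so treat this as a floor.
    node_capacity_tb = 120      # 12 x 10TB
    utilization = 0.7           # assumed fill level
    recovery_gbit = 10          # assumed usable recovery bandwidth, Gbit/s

    data_tb = node_capacity_tb * utilization
    hours = data_tb * 8 * 1000 / recovery_gbit / 3600
    print(f"~{data_tb:.0f}TB to move, ~{hours:.0f}h at {recovery_gbit}Gbit/s")
    # -> ~84TB to move, ~19h at 10Gbit/s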

How many nodes will this cluster have? 


On Mar 9, 2018 4:16 AM, "Ján Senko" <jan.senko@xxxxxxxxx> wrote:
I am planning a new Ceph deployment and I have a few questions that I could not find good answers to yet.

Our nodes will be Xeon-D machines, each with 12 HDDs and 64GB of RAM.
Our target is to use 10TB drives, for 120TB of capacity per node.

1. We want to have a small number of SSDs in the machines, for the OS and, I guess, for the BlueStore WAL/DB. I am thinking of a RAID 1 of two 400GB 2.5" SSD drives. Will this fit the WAL/DB? We plan to store many small objects. (A rough sizing sketch follows the questions below.)
2. Is there any significant network traffic during scrub/deep scrub? Assuming we are using an erasure-coded pool, how do the nodes check the consistency of an object? Do they transfer the whole object chunks, or only the checksums?
3. We have to decide which HDDs to use, and there is the question of HGST vs Seagate, 512e vs 4Kn sectors, SATA vs SAS. Do you have any tips for these decisions? We do not have very high I/O, so we do not need performance at any cost. As for manufacturer and sector size, I haven't found any guidelines or benchmarks that would steer me either way.
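
On question 1, a rough sizing sketch. The Ceph BlueStore docs suggest block.db be no smaller than about 4% of the data device, and small-object workloads tend to need more metadata space rather than less; the 4% figure and the RAID 1 usable size below are assumptions to verify, not a definitive answer:

    # Rough BlueStore block.db sizing check. The ~4% of the data
    # device figure is the guideline from the Ceph BlueStore docs
    # (an assumption to verify); hardware numbers are from this post.
    disks_per_node = 12
    disk_size_gb = 10 * 1000    # 10TB OSDs
    ssd_usable_gb = 400         # 2 x 400GB in RAID 1

    db_per_osd_gb = disk_size_gb * 0.04           # ~400GB per OSD
    total_db_gb = db_per_osd_gb * disks_per_node  # per node
    print(f"needed: ~{total_db_gb:.0f}GB, available: {ssd_usable_gb}GB")
    # -> needed: ~4800GB, available: 400GB

If block.db fills up, metadata spills over onto the slow device, so undersizing it quietly costs the performance the SSDs were meant to buy.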

Thank you for your insight
Jan



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


