New Ceph cluster design

I am planning a new Ceph deployment and have a few questions that I could not yet find good answers to.

Our nodes will be Xeon-D machines with 12 HDDs and 64 GB of RAM each.
Our target is 10 TB drives, i.e. 120 TB of raw capacity per node.

1. We want a small number of SSDs in each machine, for the OS and, I assume, for the BlueStore WAL/DB. I am thinking of a RAID 1 of two 400GB 2.5" SSDs. Will the WAL/DB fit on that? We plan to store many small objects. (Rough sizing arithmetic follows this list.)
2. Does scrubbing/deep scrubbing generate significant network traffic? Assuming an erasure-coded pool, how do the nodes check the consistency of an object: do they transfer the whole object's chunks, or only checksums? (My current mental model is sketched below.)
3. We also have to decide which HDDs to use: HGST vs Seagate, 512e vs 4kn sectors, SATA vs SAS. Any tips on these choices? Our I/O load is not very high, so we do not need performance at any cost, and for manufacturer and sector size I have not found any guidelines or benchmarks that would steer me either way. (A quick sector-size check is included below.)
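
For question 1, here is the back-of-envelope arithmetic I have so far (Python; the 4% figure is the guideline from the BlueStore docs and the ~30 GB figure is the often-cited RocksDB level boundary -- both rules of thumb rather than hard requirements, so please correct me if they are off):

# Rough BlueStore DB sizing for the proposed layout.
# Assumptions: the 400 GB RAID 1 SSD pair is shared by 12 OSDs
# (minus whatever the OS itself consumes), 10 TB data device per OSD.
ssd_usable_gb = 400          # usable capacity of the RAID 1 pair
num_osds = 12                # one OSD per HDD
hdd_size_gb = 10_000         # 10 TB data device per OSD

db_per_osd_gb = ssd_usable_gb / num_osds
four_pct_gb = hdd_size_gb * 0.04

print(f"DB per OSD:       {db_per_osd_gb:.1f} GB")   # ~33.3 GB
print(f"4% guideline/OSD: {four_pct_gb:.0f} GB")     # 400 GB

So each OSD would get roughly 33 GB of DB, far below the 4% guideline (which would need 4.8 TB of SSD per node) but just above the ~30 GB level boundary. With many small objects, DB spillover onto the HDDs seems like the failure mode to watch for.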
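To make question 2 concrete, this is the mental model I am hoping someone can confirm or correct: each OSD reads its own chunk locally and only a digest crosses the network, with full chunk transfers happening only on mismatch/repair. Toy Python below, not Ceph's actual protocol; Shard and deep_scrub are made-up names for this sketch:

# Toy model of checksum-based deep scrub for one EC object.
import zlib
from dataclasses import dataclass

@dataclass
class Shard:
    osd_id: int
    chunk: bytes                 # this OSD's EC chunk, read locally

    def digest(self) -> int:
        # The OSD checksums its chunk locally; only the small
        # digest would need to cross the network.
        return zlib.crc32(self.chunk)

def deep_scrub(shards: list[Shard], expected: dict[int, int]) -> list[int]:
    """Compare reported digests against expected ones and return the
    OSD ids whose chunks need repair (only then would full chunk
    data have to be transferred)."""
    return [s.osd_id for s in shards if s.digest() != expected[s.osd_id]]

shards = [Shard(0, b"chunk-a"), Shard(1, b"chunk-b"), Shard(2, b"chunk-XX")]
expected = {0: zlib.crc32(b"chunk-a"),
            1: zlib.crc32(b"chunk-b"),
            2: zlib.crc32(b"chunk-c")}
print(deep_scrub(shards, expected))   # -> [2]: only OSD 2 needs repair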
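And for question 3, at least the sector-size part is easy to verify once we have sample drives; this is the Linux sysfs check I plan to use ("sda" is a placeholder device name):

# Report logical/physical sector sizes from Linux sysfs, to tell
# 512n (512/512), 512e (512/4096) and 4kn (4096/4096) drives apart.
from pathlib import Path

def sector_sizes(dev: str) -> tuple[int, int]:
    q = Path("/sys/block") / dev / "queue"
    logical = int((q / "logical_block_size").read_text())
    physical = int((q / "physical_block_size").read_text())
    return logical, physical

logical, physical = sector_sizes("sda")
kind = {(512, 512): "512n", (512, 4096): "512e", (4096, 4096): "4kn"}
print(kind.get((logical, physical), "unknown"), logical, physical)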

Thank you for your insight
Jan


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
