Cameron, Somnath already covered most of these points, but I’ll add my $.02…
The key question to me is this: will these 1TB SSDs perform well as a Journal target for Ceph? They’ll need to be fast at synchronous writes to fill that role, and if they aren’t I would use them for
other OSD-related tasks and get the right SSDs for the journal workload. For more thoughts on the matter, read below…
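If you want a quick read on whether a given drive can do that, the usual check is fio running direct, synchronous 4k writes at queue depth 1 against the device (--direct=1 --sync=1). For a rough first pass without fio, below is a minimal Python sketch along the same lines: it times O_DSYNC 4 KB writes to a file you point it at. It skips O_DIRECT for simplicity, so treat the numbers as optimistic, and only aim it at a file or partition you can afford to lose.

#!/usr/bin/env python3
# Minimal synchronous-write check (a rough sketch, not a substitute for fio).
# The file is opened with O_DSYNC, so each 4 KB write() only returns once the
# data is on stable media, which is roughly the pattern a Ceph journal generates.
import os
import sys
import time

if len(sys.argv) != 2:
    sys.exit("usage: syncwrite.py <test file on the SSD you want to check>")

path = sys.argv[1]
block = b"\0" * 4096      # 4 KB per write, similar to small journal entries
count = 5000              # ~20 MB total; bump this for a longer run

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
start = time.time()
for _ in range(count):
    os.write(fd, block)
elapsed = time.time() - start
os.close(fd)

print("%d sync writes in %.2fs: %.0f IOPS, %.1f MB/s"
      % (count, elapsed, count / elapsed, count * 4096 / elapsed / 1e6))

Run it against each candidate drive; the comparison between drives (and against your fio numbers) matters more than the absolute figures.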
- 1TB SSDs used only for journals are certainly overkill… unless the underlying SSD controller is able to extend the life span of the SSD by using the unallocated portions. I would normally put the extra ~950GB of capacity
to use, either as a cache tier or as an isolated pool depending on the workload… but both of those approaches have their own considerations too, especially regarding performance and fault domains, which brings us to...
- Performance is going to vary depending on the SSD you have: is it PCIe, NVMe, SATA, or SAS? The connection type and the SSD's characteristics need to sustain the bandwidth and IOPS your workload requires, especially as
you'll be doing double writes if you use them as both journals and some kind of OSD storage (either cache tier or dedicated pool); see the back-of-envelope numbers after this list. Also, do you *know* whether these SSDs handle synchronous writes effectively? Many SSDs don't perform well for the kind of journal writes
that Ceph issues. Somnath already mentioned placing the primary OSDs on the spare space - a good way to get a boost in read performance if your Ceph architecture will support it.
- Fault domain is another consideration: the more journals you put on one SSD, the larger your fault domain will be, since losing that one drive takes out every OSD journaling to it. If you have non-enterprise SSDs this is an important point, as the wrong SSD will die quickly in a busy cluster.
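To put back-of-envelope numbers on the double-write point above (purely illustrative, substitute your own drive specs): three spinners journaling through one SSD at, say, 100 MB/s each is up to ~300 MB/s of journal traffic on that SSD before anything else happens. If the leftover ~950GB also hosts an OSD (cache tier or separate pool), that OSD's own journal and data writes land on the same device too, so a single client write to it hits the SSD twice. A SATA SSD tops out around 500-550 MB/s, so it is not hard to saturate the link well before the spinners become the bottleneck; NVMe/PCIe gives you far more headroom.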
We're setting up a Ceph cluster and want the journals for our spinning disks to be on SSDs, but all of our SSDs are 1TB. We were planning on putting 3 journals on each SSD, but that leaves 900+GB unused on
the drive. Is it possible to use the leftover space as another OSD, or will it affect performance too much?
Thanks,
Cameron Scrace
Infrastructure Engineer