Cameron,

Generally, it's not a good idea.

You want to protect the SSDs you use as journals. If anything goes wrong with that disk, you will lose all of the OSDs that depend on it. I don't think a bigger journal will gain you much performance, so the default 5 GB journal size should be good enough. If you want to reduce the fault domain and still put 3 journals on one SSD, go for minimum-size, high-endurance SSDs for that purpose.
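For what it's worth, a rough sketch of what that looks like, assuming /dev/sdb is the journal SSD and /dev/sdc is one of the spinners (host and device names are just placeholders, adjust to your setup):

    # ceph.conf -- keep the default 5 GB journal (value is in MB)
    [osd]
    osd journal size = 5120

    # carve three small journal partitions out of the SSD
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart journal-0 1MiB 5GiB
    parted -s /dev/sdb mkpart journal-1 5GiB 10GiB
    parted -s /dev/sdb mkpart journal-2 10GiB 15GiB

    # point each spinner's OSD at its own journal partition, e.g. with ceph-deploy
    ceph-deploy osd create node1:/dev/sdc:/dev/sdb1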
Now, if you want to use the rest of the space on the 1 TB SSDs, creating extra OSDs there will not gain you much (rather, you may get some burst performance). You may want to consider the following instead:

1. If your spindle OSDs are much bigger than 900 GB and you don't want to make all OSDs of similar size, a cache pool could be one of your options (a rough sketch follows this list). But remember, a cache pool can wear out your SSDs faster, as I believe it is not yet optimizing the extra writes. Sorry, I don't have exact data, as I am yet to test that out.

2. If you want to make all the OSDs of similar size, and you will be able to create a substantial number of OSDs from your unused SSD space (depending on how big the cluster is), you may want to put all of your primary OSDs on SSD and gain a significant performance boost for reads (see the CRUSH rule sketch after this list). Also, in this case, I don't think you will be getting any burst performance.
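If you do go the cache pool route, the basic wiring is roughly the following (pool names, and the 100 GB target, are made-up examples; it also assumes the cache pool's CRUSH rule maps it onto the SSD OSDs only):

    # hot-cache is the SSD-backed pool, cold-pool is the existing spinner-backed pool
    ceph osd tier add cold-pool hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-pool hot-cache

    # basic hit-set / sizing parameters (example values only)
    ceph osd pool set hot-cache hit_set_type bloom
    ceph osd pool set hot-cache target_max_bytes 107374182400   # ~100 GB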
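For option 2, one way people do "primaries on SSD" is a CRUSH rule that takes the first replica from an SSD root and the remaining replicas from the spinner root, something along these lines (this assumes you have already split the CRUSH map into separate "ssd" and "platter" roots; the names and ruleset number are examples):

    rule ssd-primary {
            ruleset 5
            type replicated
            min_size 1
            max_size 10
            # first replica (the primary) comes from the SSD root
            step take ssd
            step chooseleaf firstn 1 type host
            step emit
            # remaining replicas come from the spinning-disk root
            step take platter
            step chooseleaf firstn -1 type host
            step emit
    }

You would then point the pool at that rule with something like "ceph osd pool set <pool> crush_ruleset 5".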
Thanks & Regards,
Somnath

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Cameron.Scrace@xxxxxxxxxxxx

Setting up a Ceph cluster and we want the journals for our spinning disks to be on SSDs, but all of our SSDs are 1 TB. We were planning on putting 3 journals on each SSD, but that leaves 900+ GB unused on the drive. Is it possible to use the leftover space as another OSD, or will it affect performance too much?