Johannes,

Thank you — "osd crush update on start = false" did the trick. I wasn't aware that Ceph has automatic placement logic for OSDs (http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/9035).

This brings up a best-practice question: how is the configuration of OSD hosts with multiple storage types (e.g. spinners + flash/SSD) typically implemented in the field, from a CRUSH map / device location perspective? My preference is for a scale-out design.

In addition to the SSDs used for an EC cache tier, I'm also planning a 5:1 ratio of spinners to SSDs for journals. In this case I want to implement availability groups within the OSD host itself. For example, a 26-drive chassis will hold 6 SSDs + 20 spinners: 2 SSDs for the replicated cache tier, and the remaining 4 SSDs will form 4 availability groups of 5 spinners each. The idea is to have CRUSH take into account SSD journal failure (which affects 5 spinners).

Thanks.
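
P.S. For concreteness, this is roughly the CRUSH layout I have in mind. It is only a sketch: the "journalgroup" bucket type, the bucket names, IDs and weights are all placeholders, and only one of the four journal groups on one host is shown.

  # ceph.conf on the OSD hosts, so ceph-osd does not move itself back
  # under the plain host bucket on restart:
  [osd]
      osd crush update on start = false

  # Decompiled CRUSH map (ceph osd getcrushmap -o map.bin;
  # crushtool -d map.bin -o map.txt), with an extra bucket type
  # between "osd" and "host" for one journal SSD + its 5 spinners:

  type 0 osd
  type 1 journalgroup   # placeholder name for the availability group
  type 2 host
  type 3 rack
  type 4 root

  # one bucket per journal SSD on a host (osd.0-osd.4 share one journal)
  journalgroup node1-jg0 {
      id -10
      alg straw
      hash 0
      item osd.0 weight 1.000
      item osd.1 weight 1.000
      item osd.2 weight 1.000
      item osd.3 weight 1.000
      item osd.4 weight 1.000
  }

  host node1 {
      id -2
      alg straw
      hash 0
      item node1-jg0 weight 5.000
      # node1-jg1 .. node1-jg3 for the other three journal SSDs
  }

  # (root "default" containing the hosts omitted for brevity)

  # replicated rule that never puts two copies of a PG behind the
  # same journal SSD:
  rule hdd_journal_aware {
      ruleset 1
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type journalgroup
      step emit
  }

  # recompile and inject:
  #   crushtool -c map.txt -o map.new
  #   ceph osd setcrushmap -i map.new

With enough hosts the failure domain would normally stay at "host", but modelling the journal groups as buckets at least makes the shared-journal dependency visible to CRUSH, and the rule can be pointed at either type.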