Gregory Farnum <greg at ...> writes:

> ...and one more time, because apparently my brain's out to lunch today:
>
> ceph osd tree
>
> *sigh*

haha, we all have those days.

[root at monitor01 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-1      14.48   root default
-2      7.24            host ceph01
0       2.72                    osd.0   up      1
1       0.9                     osd.1   up      1
2       0.9                     osd.2   up      1
3       2.72                    osd.3   up      1
-3      7.24            host ceph02
4       2.72                    osd.4   up      1
5       0.9                     osd.5   up      1
6       2.72                    osd.6   up      1
7       2.72                    osd.7   up      1

I notice that the weights are all over the place. I was planning on the following once I get things going: 6 1TB SSD OSDs (across 3 hosts) as a writeback cache pool, and 6 3TB SATAs behind them in another pool for data that isn't accessed as often.
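For the cache-tier part of that plan, a rough sketch of the commands is below. The pool names (ssd-cache, sata-data), PG counts, and thresholds are just placeholders I made up, not anything from this thread, and you'd also need separate CRUSH rules (or roots) so the SSD pool only maps to the SSD OSDs and the SATA pool to the spinners before pointing the pools at them:

    # create the backing (SATA) and cache (SSD) pools -- pg counts are examples only
    ceph osd pool create sata-data 128 128
    ceph osd pool create ssd-cache 128 128

    # assuming custom CRUSH rules 1 and 2 exist for SATA-only and SSD-only placement
    ceph osd pool set sata-data crush_ruleset 1
    ceph osd pool set ssd-cache crush_ruleset 2

    # attach the SSD pool as a writeback cache tier in front of the SATA pool
    ceph osd tier add sata-data ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay sata-data ssd-cache

    # the cache tier needs a hit-set and flush/evict targets; example values
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache target_max_bytes 1000000000000
    ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
    ceph osd pool set ssd-cache cache_target_full_ratio 0.8

Clients then just write to sata-data and the overlay sends hot objects through the SSD tier; the dirty/full ratios control when objects get flushed back down to the SATA pool.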