Craig,
Thanks for the info.
I ended up doing a zap and then a create via ceph-deploy.
One question that I still have is surrounding adding the failed osd back into the pool.
In this example, osd.70 was the bad one. When I added the replacement disk back in via ceph-deploy, it came up as osd.108.
Only after osd.108 was up and running did I think to remove osd.70 from the CRUSH map, etc.
My question is this: had I removed it from the CRUSH map prior to my ceph-deploy create, should/would Ceph have reused the OSD number 70?
I would prefer to replace a failed disk with a new one and keep the old OSD assignment, if possible; that is why I am asking.
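In case it helps frame the question, the sequence I had in mind for fully removing the failed OSD before re-creating it is roughly the following (I'm not certain I have every step right, and the host/device in the last line are just placeholders; syntax also varies by ceph-deploy version):

    # stop the osd.70 daemon on its host first (e.g. service ceph stop osd.70)
    ceph osd out osd.70            # mark it out so data rebalances away
    ceph osd crush remove osd.70   # take it out of the CRUSH map
    ceph auth del osd.70           # drop its cephx key
    ceph osd rm osd.70             # remove it from the OSD map, freeing id 70
    # then re-create, e.g.: ceph-deploy osd create <host>:<device>

My (possibly wrong) understanding is that the next OSD created gets the lowest free id, so doing all of the above before the ceph-deploy create should have brought the new disk back as osd.70. If someone can confirm that, it would answer my question.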
Anyway, thanks again for all the help.
Shain
Sent from my iPhone