On 24/06/2013 18:41, John Nielsen wrote:
The official documentation is maybe not 100% idiot-proof, but it is step-by-step:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
If you lose a disk, you want to remove the OSD associated with it. This triggers a data migration, so you are back to full redundancy as soon as it finishes. When you get a replacement disk, you add an OSD for it (the same as if you were adding an entirely new disk). This also triggers a data migration, so the new disk is utilized immediately.
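For reference, the removal side of that page boils down to a handful of commands. A rough sketch, with N standing in for the OSD id (the init invocation varies by distro):

    ceph osd out N                 # start migrating data off the OSD
    service ceph stop osd.N        # stop the daemon once migration completes
    ceph osd crush remove osd.N    # take it out of the CRUSH map
    ceph auth del osd.N            # delete its cephx key
    ceph osd rm N                  # remove the OSD from the cluster map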
If you have a spare or replacement disk on hand when a disk goes bad, you might save some data migration by doing the removal and re-adding within a short window, but otherwise "drive replacement" looks exactly like retiring an OSD and adding a new one that happens to use the same drive slot.
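For that short-window case, one approach is the noout flag, which keeps the cluster from marking the OSD out and rebalancing while you work. A sketch only, with the caveat that the cluster runs degraded until the OSD is back:

    ceph osd set noout       # suspend automatic out-marking and rebalancing
    # ... swap the drive and bring the OSD back up ...
    ceph osd unset noout     # resume normal behaviour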
That's good, thank you. So I think it's something like this (a rough command sketch follows the list):
* Remove OSD
* Unmount filesystem (forcibly if necessary)
* Replace drive
* mkfs a new filesystem
* mount it on /var/lib/ceph/osd/ceph-{osd-number}
* Start OSD
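As a minimal sketch of those steps, assuming xfs, OSD id N, and the replacement drive at /dev/sdX (all placeholders; if the old OSD was fully removed, a ceph osd create and ceph osd crush add would also be needed before starting it):

    umount -f /var/lib/ceph/osd/ceph-N    # force if the dead disk has hung it
    mkfs.xfs -f /dev/sdX                  # fresh filesystem on the replacement
    mount /dev/sdX /var/lib/ceph/osd/ceph-N
    ceph-osd -i N --mkfs --mkkey          # initialize the OSD data directory and key
    ceph auth add osd.N osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-N/keyring
    service ceph start osd.N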
Would you typically reuse the same OSD number?
One other thing I'm not clear about: at
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual
it says to mkdir the mountpoint, mkfs and mount the filesystem.
But at
http://ceph.com/docs/master/start/quick-ceph-deploy/#add-osds-on-standalone-disks
it says to use "ceph-deploy osd prepare" and "ceph-deploy osd activate",
or the one-step version, "ceph-deploy osd create".
Is ceph-deploy doing the same things under the hood? Could I make a shorter
disk-replacement procedure that uses ceph-deploy?
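I'm imagining the whole replacement might then shrink to something like this, with node1:sdb as a made-up host:disk pair in the quick-start's syntax:

    ceph-deploy osd prepare node1:sdb
    ceph-deploy osd activate node1:sdb
    # or the one-step equivalent:
    ceph-deploy osd create node1:sdb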
Thanks,
Brian.